[Binary tar archive — contents not recoverable as text. Member listing from the tar header:

- var/home/core/zuul-output/ (directory, mode 0755, owner core:core)
- var/home/core/zuul-output/logs/ (directory, mode 0755, owner core:core)
- var/home/core/zuul-output/logs/kubelet.log.gz (file, mode 0644, owner core:core, gzip-compressed kubelet log)]
:'On 7c9TRHN^oL-:jKWZVt1D9e& dncBn`#'5Φu-7no'3ǎ ^}zՓ0Ŷ[ohY7ӛc ofu^3+К瘽9fV+1+Gg ]@a ~" &'JDn2%zzz ^h{*..REj9۰M{M֨,7,:u|q ff =ЏWaܧ~mY؝Gֽ򒏿EQIuoD6.^]{{}ݻR߹K6}::z4׺=s^z\˫8ߵeWχI0(#|׷?wX0cf~jW.}C!tETwo]YBof7Y͢o}E,f7Y͢o}E,f7Y͢o}E,f7Y͢o}E,f7Y͢o}E,f7Y͢o}-YD߶?$~;o ތ֒x.,~oO&x2 ڇ6@]Fq3㔺ЦoK w{7E|/z`G Y؀=}kk3 XYV0ҨCDZDPn<,:76[k2p0s6Oؾ^рUˀl_א)\<{baI݋]%o 'zmK*q AZf>`=e=*V3DgQQ̪HR$* 2i#W) 夐$ 11ĐM-C q~0Q( }*0uܐMkntzYns^W _e$i%VkaIJ>F2a-`R޼Vi`Ā҆tdpܾ7{X(a "oe݊U-ep >$%B ,A |ѥmp*\CP\iEҊ<@!ԃUa+32Ip3ꑁ[^)"X=/C3'ÔE`cd*znRƳX'5,8p`KƽG &V+4Cͯ}мp^_ar$Ytg)De;OQ:9Ҹ'i?xS&`C_Sΐ dE\eI8N͜~ޙzƳRr0z5p^Q>8)<8)0!L9DG)_Swd&pa7G)LGa(F؀kB%xR*#B\wSAa8Mc,>#/u<.M=%m},trr<<^T`r*MjA96^8Ɵ0*ac;{Mb֡ߚ;?7/HI\ ?YӷT/v)H?_Mk0;oZ%Wt lF5+Y,@i} z0bfѓEt)rou]v+Lh$f:d~}4vIb/MǠ_QʱҥQCODu~|?qrxx /fUHrmHh4h2k}ZF+_ &O?}O>|D}z_?};2 LQZ,oA{Mhjڛ5Mۢiru۴KsrG.Ija܅E>|Tb5raZ6X#i$|G` 'J# JiCY" qctYu>8cIzhlvjkϐb:Fђ8JjM4HYpQbE=DaU,dYԙwa8&[e6@oo=ذsKΌ;/2V઒%Jēt]w pquWId&+UR[1EU:Dɲh_F֝E/=8?g@?|wv;ڜ`] br c`*WyݏY-6s 4׾ao 咁zNgAxͷcx>}v $aK~1-_ÍYG!`7}k 5m KmAZ{BZL+*OʉMU f]K" bJK*1 ti} _/ Gu$E eTVq7:Z:CJVRŢ\TN:F(M>8JH74!ֺ5%7ݻô xz0zۛ3]4,N>䜃Aȹ+;zD$L}q6|}qr:+>򼟒A[tE$?d:}.ͫVMћԛ7 q7RalPjՂκ%JTRmk^jx>*mF\E1bO1xEC;xϊU♕b-9։z,6LCz[$SIܰo7 8^L6t-t .Yf;!2=ndn(}{G'O,Sz53Qx"cy{SҙٟMgrܿnUll >H-ulWGmE0ŁʠNySVʤ&--2s%PiX'!rڞ?=^ox%jPԤ8fIl81B੡tx9q6:Cz YA| O6Y\oxG2pܝac¦d;Bʾz]RjAv/Έ^5v4s2sm6*Fec"fCe#·fNX~БBL`9GG֧ԙ $ o>}pmԣ =搿TwUåh|<` jvWg_~7i5zZig;vR.#0%d &Zj{wK9N5zw`wߴ` .;B2?0i6O{4-zumpNw7-tpoAn^'_Kց^SI{i8k`fw{mxfz5N^qԋ0]2G=x.xtO??=F?-rju=O\(J!\0J`D$+h/W|r Er&i\Ea!FDEpfVS-X|X<̉ux>\Yu>9/ݻ7/xeEE p!@9Z*m (-S3(&PoEy(olK6uci󭵿`s<YoK8fO0#?$73H/}9Ø&hUqmZҕFV\`MՇ/ Aa}plP8/Y;x ؇-Ո `@*ã VPl$< ,8OeňsL(QmGs tdXD$ m}Z)$17F&f3tuk> Hv8'=ƭYT)՟/'`OЙOu511L=Yl |0 ׫ͥYhEǸN8̀By LqYjU$#VUP:Czy@Ti`k\J6\ 3f4j|f^1Tk%i=9G+ {NI@G}2XG$CRP+XƉ*"*#+0`,9sXr@08Xϙ j\g15#R.ŠHjv,/ ɓTgq|>!;!&k`6nmYC7:_Y{6e!֠3N & V$U1xC0c5KA`!;y-Xhm p3h`) ߘC&ggY@ǣ4v$,Xz.&ʟ%ųܓgS4gy5ͯơ1HY T9R*%'b:J#*,&h\UʷPnJ?]T@Y= 4{cl/NX>>e4a9Ãkۼvfx{gO$AHRVa S띱Vc&yeLQl"@sj4BZ"4_4q~8Yk68wa8jA$IH$eYrąB TD VV0]V$r1$5AZX,@)"£op_>brroDl%f 㚪+qfb5f'/w8\]SI 
?U`LNkvVdx2)g"gP GkQډ b>!}Ԗx9ύ8¿S^2͏ә^Y'CsI͎vr3qo Tf0}?W@hd aD%y S%^FHHLBDNsʾ" \HcpGe9AQN% =rɃ4& +:tL+5,pF9/IN Y$53бsѴiEӞR^=Rl89{ٻ6$W%}t ,nB[&),/ERdQ,eUyDq"jVu//&Fc[h$9"'S9S)e9790Tq]m4xv0+*l<&kz_#1 D<3ƻLxniˍiBPM%A5AwVU.6A&I,0Qa 0n2WCJb #5ڲCPaib ઝ "g(L ݎ{Xq4媪H{Nt!,fݴODiqȉFQ!zy0a 9 ZXxƭl"HX,H؄ A2 ƨU1kQ:TZkYżR9[> ԇSXS#CH,ýY)@v:+h@.rDcGN /PxMEhB:LWH@ÆKeNi\1L4TG`ud4&jci^TWF=W/gc dB "$ Q˅)8jE>(30IزnTp%7ih 5d(OPe>}Xؒ{:{cJ Jzf5"- "t(zMhe䝩$c) EvimdzxUHyl}srQC3ڗC*c&@* q 7GG6xnE-~5W[ֺMց7KwoZ x`!YTSI0ӌIY)J&&È 3>ii~l&YIdv͗  >v`޾h={%eq6N:ԞzE~(. By/345u֠c8C ]<ˮXX]Yl<)a88p $@BZEBdVEr{Kp!jLa('$A69y2@kmWvɴ,ha}s9H+I,q!i$pA f0e"֒E&2xr q:R")9IE$s2Ja; > 0B' ^/vr$ל7\sDffs f8 &YW h Gr ʉsfe7nus ׊<@!î,W!gdHTBg#?aji4E<%&w"cd*znu{IVVa8g%#D+ NSD8:QY: ݦSVG/[q&fN(ݾ3]t)m7ܞ}5j_VLL'eo[7a&oCprIzA_ -MnNv}m=R$C( @HPRL݅>\ .BWgŻJ`*ط,6rq 1='?uUaHyVP凂o|^\%i{7/Bݠ(KmU kA9f]aI9Ws59_[K}*J&œkM"0[AK;Ӌdl!Q~x,vCʖ_]Y3EHi330~.`(]>Lz4os霢#h}NZWh$<|斁VmV ҅Q@Q^-_[x;IiI.e E-?cLms*tUJcNo:`㯟/^ u>\}=y[E ZM¿!@#OhW47m*EUbiVv]f;EL47\/; qqY^-e&=̕$*\'(A ?; e)+0 )f1XE Xp$iTd9Zs$+FKO EsDL %VNv\KJ.uFa|?K>NgR+ʇ{}v^GygݖqUdMPz&S|o̴`mnnd96l i.9U߯6TP0vY4aP u 㥙3<-a˔tV׺~ g[Fqi7nu&uf7T]W:aJx5kImoCIzSjtd_JZJgP:*MJ7Pg]?yP2YZv>y^xs兲E %ׯaf>omog-2GܓO7T ݹg;x5,9趖22rLțq'qgk!H)c!%(Q0.m +;Ƙcx8 4x%,PE>Ӓ)E 0 uVPG О ^gp%;"{ĮdKR Ri*R6|n_W!%3k)cQ.2ah$iә^~]~=Wbn+j\7@ykh͞]X'y.a IUp .PHx3jg>Ox-RFXpSj>]q|Ozc؄P{֏ϋt$VBfz-Se1l[ ;ԣej?~qeAe_PAVURcg6(Y8` )n,x Zvܥ/ݳ):L)!ruA j`.FzK7lP8)ZF 6G-l ES&9ߚiPY\p[Lڳtkj~ > fM1?na,X I C ~JQT )63>m &g/&ۇYr#.*.SҼvid&J0BKLsf2B/VE*$o+9 ͎g][VNL18ÉdlƋ ar{y*w$TbY(WFy%n!O]]Ђo]!+i)rZsqJ` R}g rkdORI8s3PBTkme2qL[ T-xn1U|gT ¡IPD1V7XM% ʅCZŕ.!e8V-#&NKQv:s/)XJh@%jٗ]6V޸Dl鵛\Ɏkx RM{Нzجqyt[붵7Bצ֡1 xK2h|t`&x푇cl`{𣳥qTK\~  _ " #h//T>id' )Zf9)I7IyM1ZlF MEqY'Yf^*[ƗlB o煠OIL4w>Ͼ,*|STv!!od#F1ξn_qw"OG9a)uE 8_CoݻC!]¤PtvEw8 6PKJ(b1 92q,w'sJ¹_~ ]]/=9#OϢmD ύY=CU~obxejBLO\?T.5cwrtr[ Z M"\ȐϽi֘QzASt5ĥC,\b+FnP9sb`ՙH)9g(Q{w*q6[8伯׸8CM'kby,yK\,_(tXp~o`]4|4\;9E'RldUW*2 Tz :}@.9r%=+h4㉑g+zrq\S{2<&{S[ro--kryn/͙QV%wqR9qJ\CJ:WH<.! 
ə'@GL]ƨj'.fԴ(\/yr,!RQ;C:&u=)!E%rWv]&we"mwWnޢ* rW)쌻-nWU!ֆǻQJO{wj@)=orW8~zYwu?i rWtwE{wGoWpˇۥ*4?^t;<<řX9_|Q;=P響rM@;^n/e 1gtp驽[n^ rQh T{l/?Ợ)`kn :h^9 d͵&agQ2ךϵsk}H1RgEqWFicvw5JyDbގv ]qW&ǝd>6<]R}mOG(,_~_*约a'΋畬Vߙ]+4T1CjqLR uSwng~s3]z-BM+zANJ-@_Ec0ӥ;|ϋUW{8w-?|"DLŇ} 8'=,XV An:-WtV~:KGRAӷ[F zAiFq$ ZAw~O`;]nS uآ0Q f=>ܥ 94]Ɂd4g -1Ι-bE'4LO#kaOEUEZp d#Ns =8whd$?=9:J_56H |j4JZJ2ElرzCK` R}Mj+Irk7T vp,f-js$4Tkme2qlI T}w 5j2}Cj'Ae$umjf`.~ICr'Jcsh搲T̝+T H3Sw٢J5K  h|]c139vӁk=5sCr>AݠI{-YcmE~0E:54%oep[+g+P0y::'Hj LbAX7%9ŠlS?Φ'C̐Beƞ{6jT*DZ:†R첒AN&H%\IץBSN]B{-zÙiQ3;(mVeRka pn煠~rsf v!!od#F1јvӘaٺE]ܢE] ^ 'Ӝ0ˢcG5ֽ >ȕ3C ήgj[\d*rKpcdlK t?Wx^>}{rLO0f]mz,y'A U3W@^u)*w0OcK)༆rHC a3G{gƺ0R% )bn(o8qL`؝abn :ks%,h4Tݸ{MGWcroHeb;8w\ tvǗmE`ߝ[ movX2xm^iާ>r{noX>[ Z M"\ȐϽi֘QzASt5ĥC,\b+FnP9sb`ՙH)9g(Q{w*q6[8伯׸8Cȍ47 ^q>|Ȓ"{l@y}yrP.REC=Hö ^Çx'^]Nί)M8›69X:lV\=G6ܘ<,0O#gp Y2TheD ]Ւ9@= 1SPmf ̇LJ?>0eXK$ 쳅t" DX"Ԑ1KZjI p#HAьUB^[0nkµ$ϙ0JPI r8L&>eZ;xae>UV{>=YuŸ$]gr#,O'{}Edi<`*%r *5^!0T*$g31U6 d :V$Er;qj1zF d!S{ĹÞ[Qʋ4㡾O >/:|yhYyp󮾳/gut/>0޳6qW%q *ՕB++eY/c͓RS׳x @rU6%{z{Ǐ^zdk~f(,C'ZK(Il K`!#a:0- _7i ~θ<v צ-n5&XUBg WP0 (K}c}9w4 =gvI4B $8X: DH[XA`Ir&$53GcMXMKi-OJ{f᮹m4WZk?ßZ* 1+mҁ'5 "I , ˃>g܀k0;ԁ}tӌ]Ǜn6MƼ]vEBSy2vK,f VQG%>UԒ(\G4SI!Im*RO8W̽iͶ܀}Xo"V .sZ\PY#LsG XĠE&2vJ:cLu;"E RrHd 3"P?w|Jlp0@AP(&>QPz,3CQ_tUEH?W`` c``#e]"""G] .XêEwkE@Ԧbc6x"0(mPRYB"J<-!5KHtQ@T X+Va8gtKƽG N) ȝp;YrQY>%Fѵqỳaogo %`WZ h_e2Y>x9~{O!YUqiIEQt{Q>d d}RQD 䘂tQR J1u3+^uBOջJ``- `&i.Ra!gon]wQ8g Bj0]>3B_ /-6)Zj[_f G9-DQ+rvM6 o@N|LËUV;sZgeb_ Wݯo`| ̕^cTvIo_M`~u#IY;GfY V0b~dG1W)Z?9V:dݨusfD2J' aB#i`ХoF!,SlPA,t#SFTtK盯0G* ?%e2?$5 Q6NX@P)b0}9n|G ?xsÏow/o?^~uqw?9'ؗqp :u h3Fd 04ojho24Ul1jrՂo2.mr5ma\Lj&ܕ/X"AhG*|oICZ+β^]Xp#J# J; c)zW`Sbl2HzloÄq#lT8 0- =x4 S\M4HYPbE=DaU=iPg7Nl?-ePl1w9y5f޺WK_)9_JU\ż(f, `l+6=}pR|uR'&N“M£`m'yi?nV#~ ZJ $8f7(܅1E@ KdTiDg4xտ@2*&A: pAL 2 ލéo[#xr!零5ͱ ,{[>*~UzK(:`QYG ߌoQ|`OY<"=0ijE3j8|o`MAnɑZ :[/AG{+Fg _qkHLrJ͙"ǜ M1ۡN s[}*=8p)"M&x0lSCz Z{1[,3V_cDeF4Btqspq/D Y:RR~6vIL݇^dݯa ..%u[_KK/sFAisWzs;ə>z}]לmYۚBs,R+l[@FR=ge+HN 9*< )+RU^f#Җ!ƐXWV +x \j4001Nٯ$(DEFձ!{ 
{`p9ni<3<Ϥ?VbvMו޼殛π\ (,NCk]^?Q]L:^}phCbu^^=,gi5}vjBZN2l+=ϵ\K}:fs7w$_w|I`xYK7-%`"{&CS 9S`h/%%H *XZYh64A7IH iYaf!-֟`CVL[3m>3jxп[\B-ay8\[ VKL+`(p`AMWb'u@=H66wz'ِm%^Ta-;&.o9vƋ-i[U>=B 3,g*1Ľ&n< \-U=T]D,F62 /!TQ)MI<̷ &Lx04* DI}m}U+7{yZcgEoC|/.;+$QYv !yP|Rg:瞡unvρ,0E)kcU׻xyj󪣾`)mBT{j4腉Y*Us-c1KI478Ŝ9s˭#:F V#֓O̔B` V%%p)BSϘ8P=h:F"i:ЍL .7ˤGMjZ$X }~\mBU B ݛ7)t U97J`UR9c6([8~AR^Iś+ֲ}m)3k>CDy"'ZGt̥P/uSӜef)-qā z!Q6f8${=Ղk2U@>n\5LՒh7wV8ѡ#8{B^p5ORck/ {3ȐӥOmŌ:+$  }k˵am9A"{K9FŔk°#UFH b83@VS-a]OnóOBqlWmȧ?xqGE ,bK5Qi4*_43Xq,o[ẃ1qo|tZlA/;O7`0 Z\zNt9XsVLu[諞i&>$8mYd6FDI@,<0J"4! WꈔsL(QmGs tdXD$ -J)%RHDc:bcCw81>Ck4nA(O.ּX{\9=U>KUѪQ1PB'tK̉_W??V!}'~ JxrFN }@ /jOH% ǃaPRW&Z0ٝ 7- YV* 35\+s}5|*Pa]!^QU{ګE%} 0࠯Sj:Rjsfȵ1On_#ɼfad;V_n~vI}QkT7$5#(Ȯs1mRS1c9nFuⓛݛ{{G،b&֠ۜU?3:βMգi=TewWLٔRR>qL ǫf;xF'*P\p]N\L7= ftP&M' hDPC}augt&:0j٦0(ɣb=O].yR~&m]asR[:A2S0ʮqd& PRb`_v] q[Ͼ2n`W[FuJY*$wHB+qEMN(2 1(9ȱދ,T` E͋& SI IG&$btd KZP6 2sd"#7ϕ,8Adќ#*-!8)q~J[/׶ګ 5Mh %r9(ݓ (滩 ~E :7ɄE7 ?#]n*0 +S Hꣿ Pj,ܘJ s}jݹH3osA#Ι5 lAA!+Z80sȴN 6y#* o&gb* +yF:H*0ml,{&ohmM(Z*\c7)Z Z}[;aLtָ\&ɊO'E Jυo&;f$"2?CJٺI6B>qѲZW6sWyh{&9U bIMO'F3KSQT*d*|T1J'zZHz`v0p5;,z(aPJ`WeWn=&RYܵ֔ ey,;S)gMqz>eSكͩ9&9AB+L)3 " $ -DQ|*;O } Pg6K:`g11.g2@tƬSaY[%vmF[u~ke2)4YfB;/Dw gZȑ"̗m>!08lfD* .L.8*i` r) >bIԅ5&Uv aQyo*͇%cXmОc&0̆>I$n\fr&he92rO3}߲s/\Mho o.@2Oex }GnԂ=BZVwrNBGܼ@zi2e4Fp]k^/[_+=3jlMe`*a:+b҉,?N&:r^󺭅j^S:jBy l1kUZvu=aq3+ksm&Yjc)a_ԵV,zV5/mTvU^#*RHb6E8J:Uv*heWSnJRd 7Ҙ,lHxTJ{'dT3V֋9jTPe/\"6p!q]<wDBdV{)QN en Fn(8^\.|nL4ʾP'=5nKOc}7̪Ntx*hxxsڪcr屻yx1V -( 0c2 2QWyFz ca{ ?.cAX"x+Ii%Ppym 렳 |XAȤ5H4@Fe26G [& @zkƶFΞ,cI, ܧ9NlE NٴB7W_{SC376ۂ'* QNdj2 e[rqϬ6cR{r&ճdZy4:εs{S[r k0($ύlcIVbʑoEn ވ9.rSx׋6!m-vEgY>eYRM]?ԠqgPv8LȐYQ"Q&Ұ#(Ubj4٭k$UK(5F5"tӈ\Ђ$wY. )D Qs#HF#K)6 mG$Ge;Uד)^*j(;>Ɲ'ɞ)̂a8,J ,l򁄬UBS)ʓmT92@9gQ`p+حy]uRc ˨a `Zi„( )3l妻6˩Fڷ~]ߧI2Fg F6߻U9k gx2B j((,sp`ϐrwd.K01J dc9x$*%G6r$*@؝ <Β.VbM6͕jQ'2RV>cjQqJ͎ 6vjKETw+oW__OɬT"V7J̒aqWzF,~[\TYe8_)ǽؿ<'! 
)o\o͇w}/^i:i"χx 0jkho1М93ls ָj{V!V/1iT&|oI]"nZ[hngQ]ܼp#I2"j)%Od,e@ys8OMaA3IOmXh?ѿvmȖ##1E$9"Id.%,%}hI(ayTUuZ9ԙP[{/"w|*9>9&xWuKD`L#*H y< `xcv,o,To|5vx⍩φ#t&pZ}MN{4Y% x+Ssg2L5ysۖ}ViV?$LyC7dDG!Y2y -)$SNNnȮ;_cinD\MF2B{%Rʑ* TR={SjWR k{׻mZZ wo%M~n54}6`Mnlw77lJrnop2?c]Yךm9{o;Dmr;mg>a(03$O>p&PTUUI%Y"kt%"I:)mW`3'\;G\h.$# >41k2yLLΝuuZApO{6.]7_IYT]^? A\0^s9wj]Qzu5k{ZdG6;.լ;\dٿmMϫƻ;yv~(q[<ֱ>d3NDnӤν̵V:ķ C ̼$EOWtδ$:%>DYetrsr dcJ`L%p%cP$JRhII&pQ!7e䶀wN[<$IvVy`y`g^X,eKZI%I(,X*FfeD|_fC~D'ث%񌾲Y 4 sP)4t^Y8Rsubӊ%jv+$Գ\jk|6 8X^F޶:AF;X>Y-ҽ,xENSs]~b_hF5fbƴ ?Y}qo?^qG:[7_;{gv7Y.!`žbJ" atO> JOIlQej^;mҜB@'z)fnCŇ!c@y5(j<8Oym4}zҮd씑AD˹EIy1bDrF&\k7sdq L|J6[/*{¦۪^+.1}8z?R`޲X/yNM/uәP"cgݢ `Y@accE d01u }&XAGStR ;Up R&.ZQɂBrnICJc*W+Hn\' "} 1l$Uv6PqJ8U`O@S.@}.Pj. Qڽ1Y m49s!~kQPG sZl= j6J Y;1ҹR/J~T.v瑵9%hIҹ"h^%MK֗GJH}[#EȒAQ6Dׄj8q]$qCܦoWFRwt; P -'? 9Pd_Q%=ӂKCpt\pb A{[jthKHC.EYoxު^rFahvA6k~a$@8/~2CcQS^,RYB.xRZ  Vg  6QBӍfBVDr7m 6r9Qdi4>3חy>jb,CBuojᤶ*F6  7t=}3]f# *M O W<" %!D#*D!B9ꮛ+ tⓀQ(pXd$fFJkL,4[}}_1F^߂ty:i3=nәj><Ĵ\ "'Uŝkx(VCU)x78!TyBě2WU\bnHJ-`^[z ŝ;zR]=O\ek'61WϓҚ~+ ssW/N\U8sUŵ'cHZwsUT0hUA6H9[%2"ߟ6W7٧b|'{T>?go^MoM4rA&#{B#n2q3r2P>Ygc?JlC];Z Ӎyz1ήGaP΍u:4?&{X ;-nSW= ;ӓUk){Bҝ ƨ" ƨc E$'dO\Uq:sEj*@ޤi, +؉1WU\%O\Uiw7WUJs͕ըvp>d8PuQuz??$~VnB[+k`cra[ !s\ktW&{Bh#hesʉKUAKƒL"cIZyLtN!(~*%-T1qtmO3L1﷛_TC/;Owҗ%Vy t-QݍOrkBֽnWui*JӷUKs Yk].9@?39J z &J&Tq nB~7M& 9\j'mK'EV[">Ek5joڨo4qE7q:=4>PDnL5{tRWlGb-!~q'r}U&D_ *i F@.Jؐ/*@"deVeaSg/ܵh+ڌ.W|qN |Vvu!ۺ'[SnٲAoW*>sݛO7h=?5A3A1揸HԹnx3Hg kbO=Tň5ݵyy߇ZlxݓS5Vc<]&v][!f-|ޗn;sB97GWn6@ 6#Cpע23*漵?1gAF!r'g[Kh޽;}ȹ/FDL:V@c*iZ->:TR$@zen͑"z oG^xu5dX4K<"'{ ڈ|q@GAGk-'? 9Pd_hG+L . 
+;L8qVJ)(myЖ\"U!h6.8Eplm= ah8?, yИf,?ĶI*:-tEP$g Cl(àMCotG.NMg.'oo4O^ٙK$~ϟ>;ФPxZ8żJzhԎ%K{@a P Rbp\l$a@)tP$!3跚bD(Rr>'^Cus.@|0JE m,23H W5Fl&fH-Xźf{jC#o}<8+sh)Q_ >iYOc1@#AL2O'SeT:*$b2k[*%ΦkzW]:$dKIl9 >E-Krg.ޣsUsIވKGyY%BnBLpY2q!]݄*x@oWhMm%GnEO ~&xI9[QD-h"O)fl˘\6?9ͥj[DyWI."4MO>#YWTαGos\XU3\ƨ9K؜bNJTCCp~#g\H*(sRDX`]୷:%:C5$DI'h+ٲc2hJ4E"4|*2s{Ll&Vǂ#}ȓuvؤ/[tL*:UX\{46DQ09`'ݶ%2x_LpKdaQ엎7c!h@>BkX#s1t% r9ӏRDt)XxIAzc9h"׭ko➄H Mfvk`GȔzQI*pL9 Mc҉, :R&Q!Vځw{_yig GJ| H9E2|.w?aѳOWW߿F/%]f̦Jw!@F9RiFS ;"\Ԟ+Q)tKvDEHfRa)DnʺJc(LmZN+@wW73$47*KOϴx]#C4|k/pG]\Uct=|u=b `l1JaG 0ccea싢 ce`I އTl>]TT@Z!tqA;hېA0*i F ˂@ B#MQ:-=¦Z{z_k cm^'8C[MZ_>N٪US{ݚfH?3G"&j4/sk-9{J!y2ZF CylNg'v x"~bFI"yačT"Cy[YY@oʪxNV4dp5b`a[\}ϧ?OSB/'-59scӌlQ8%%.t \.q dRx>ā rccQzv=&1d&cHٗ`vn2kuz[wrsr.6A+\)P!EYSr jm-zn2y}Ci9:-  ADB5*I;#tA 9nیq;]{o9*8\a 82v 8\`i+%ZN_ݒ,,)Ʊd*V_Y,@("£;fAn$*Ű[ 6k;A[p1*` Mx5).laZsƣaI$zoOKt 4VY[QSߴvتV^Za*O@0fW C&1cqRh-J),R>R .łH%bdF/c#m9k_jEBڲ.$.ܩ.R6M Hro]p_M?=L3klsɨa!rHx JQkP"bEPLYRE#aHQk %ImR!``yI=t$LEt!H"gƶÁs.Eki; v: "4X{5x:*}IfV%Z iU32 hGPF|@N 0AGc}9p4m`Fj}X:I1Fl?6Ոe:icT16H"+C#b<DkD)NE9kin`Ȕ,)ppuĤgN x҄Ij$fX6Xk䬑h0 !u%EѲ^^A͕Q]@(@ 7q.8X=8н [Յ)bn ߁LާNYFq!|μO62}qw_Yw]w/ͷc8;x@R+"Hp6UQ(DfU$ @ Qȕf C9)$ n{`.5ËӆiY̱~ DZIpg1kpA f0e`   xPcHwD@4:&(fE~p0 AP$:3 {_?瘨wㇷ e,U'M$h5 oCFd 4wxU| OJE޼|0CуGn+);'!xW%d "q?mE-jNcRŠrQR&R+Sd)Y.3 ^{e˫*:&A: -hAL cZ]i.[-GAɆ8Xnݬ_gٯգ7ҝUo~+E' L&j6HhPT sV!IVYxvߍdxcSRoF/Z/6z);K{b+Bg _qkJLrJ͙"ǜ M1!*nY7-[gN(#M$,x0u |ۄ_0vY4a#A29HysQ :c}5&k%T>M#DO!-.nxbCTedc(}(,[Ԃ}i꫹sp5~48`ǶAӶAi{Wz}{/ƇGG639߲;* Rb9{r E +HN 9*<W̩3u;@ ˕ʨ%5`4#L\Pm$A$<. pyu ##r)iEʘZ7&B˫N^Ȁ]L6^cJP*WW9(VT:-O.ͲB(C .5 ^湒!%-jovyXǬZ( YW:,CzI˔5'ݟ]7RjLX9!7r Eu~|s(m͡-K:CD-f~h;m_ΡV8 pTI0h8DpΊqyu2O":9iS+HʋIӲC9j.? 
X"}46KԱ(D-?xE+Zԃ,0fdI?!\]=\FQWO OPWSWN=& k:s93SYTr^_˙0b"ah3",DDꥦ^AU` Xy$@n#cu 'ن[5uz7I7W}Z3mX{~?ZX~Nn:9HE*L揸W`,nBi=0̺ͨIA&D bʠ_XJ)*67wkMMx2k*-։6Gngm0VTѝ8Nx[8iaoac$#+N!8ʝQ*ltB53NQ0׭"L왜\o>mM򞧐^w{ LC8-'$%xTB=cUΥ9cHJs<,u.ece8 v N[1:QA*0q" I$`yXpj~cB)7j<[֑aYqGkZ )$m5rVۯ }Hy#f_rȖQy'xn՟0yM`~C31db -:"4 X1z4i\5-QԈwi/5-RsmlQӼU4PY2'nKpU$*moB!gh՞2#Ƒz4‚Sw ?~q?O5~m}~$>_x(0CC w~3Eb &>S+)#16.u'0BάI9}ie/ۂq _s8  ߂3:hƑBŽ bB⚫h0,H(Όj0Q.^ЉaDll?ٜ*-m;o2F,c I cր2ٙg[BJuvۊԽ)`"wsE+΂Yv}K߫Tl҇0Y2^ !+I3v! XHe QCс/9EuvhY; %D2bSviVT[26͒q=JyYeWeGOL7fk#]$nvr9Is@''O'+N'Eh4TGhr!# #d R(=j! &0QSr|Gt1 UìQiUŦv>9({/ E@&[fǣ\:vEj#ص.YZZ&M1@+XQI@L1iIJVDH"bJ2#CiXNxd*HNw8 sY$V#,{fy*S*wE"Fghq/Q|A ylH"zLY(̌TX/JdYoK)F^%bɐW7 0ƒU9&0I RbMZKj,vlzs2.:{E׳\{;-/<^#0ņzh_z;vv޺mw{&>'ݨhRL LQqVeHYɞކAŤ:ϜOr}`r>BHgĥ+}q{3n]T_ 0%^'Cp$4.Q&Cb66$ \մiAY%ID[3:B=B@2)<;9i\jqv ]r3+YsӅO-+>yY(x D2+Jy-"GAJJ$m?0`xH(WucZbA}G5|' |<._ߴ 1rnbڭW5g,UJ?N+_EcO`SC冱oPw"X]n\Ǽ_:Qn3Ģ /x=] J#p #褽й*&XBqJB0 9"F/qIXOGz |,XyX+>J(Go\31Kʰ XqJ'ru%Po0YVޑꒌ VM<DxH'ܗ,vhD3] ZfyFfݪ'(-ZOJMEh2]e"r%B@(BJIž@1:jbơdЦμh! F0QV,vH2}lzt5p1PESY@餋 %YkcKcsF2&(;ϦvLugD1}VJ3LO4gio:'<6}<\~kriRaaWYt n^~GGYѪ1HvmAH ~pCId2X)nOH-88NkN&tO.&Sta}+JdT+hYϚ]ÕBp5|6LZC Mtk3 }jtrpih_ۛŧy{zj~aBzxuz\{js`S;?.x8 ?\:7e9͟&Fؚ=s{U`v`v9S<lmuZ#`4zQ-rqOh,ΞӮnTg7Tg76h'^uN. 
=sx:)$z+Dq$zA [&SmbBqTq5NAlt"Z?LKA6YE^WdsF@Βj_Og!`QCo `TSfSfixvL7~|?|A*o?{eJ!!!B=`_еkoѵER9oӯsFNs+B@1Tal?֓/I~]^]G_Y>I@o\IbJN+ؙqcQD)Lj0:!JS[ҥi]xAc$ V"cY$[P"mA 6 d@6uz9ԙlW zF*LfSA'|בj[qyQ-Mшx&k3tuDsgE4Z KDXSA1e#FָlJ_s'!-uȇ?mv@$LD !Y9ґ<+(S !;YdR3PKukCjX\|*CZ_VG3 hF)?W7O#dj`pۼȭ^gq44@4uw֓ݗtes{7-Կ+|J)-kۆd9+,s#gBZ/An[Ϋ\M~x2RVRN XԷ*gcrnZ^HEޮbpї5ff\AʑTu:OűCŚe9[ QF1$U9_jyX]|Tgk\om%ζ͊솦MWjk69Zbso&< Jxi-醟n_nfOos4 ø~5Ɇ\޾u{嫇77>mE+O&sҫ9oCQ*AG^xĜ7fHxI/f3O}r揧(gOjn{ts/ b*JR*jG(W+`Uwhnw9Jg/?9qvVP!lA?]b)I'I֙t5)(4d|A 6C:sNP,ƕ׬b[URI:1L>j1kH$_(AKhw.5UwoZXijL^-o^cA4pp/p൚?yoct?.,IgphÛW !&e4AhLRe6+7|R9(~\`*2AA2xIϽoGZaʙOh .W)叮#[ȀEِ4߄M14%dLFEO?rWBl/b1EI}.3^.;8p&ۣ `* @:$[NHg0$AꄒCA.z)SD 9)p3Dc4[C({aeD X=+YɎbgَ\0}?DR'3 %z-,YM% N&hJIQ&Cl,b :9E})~:3+$m,+egB$+"CDA 3ۮ@wy] G<&Y#+ *Uב/@⭮+1(btt%{}ՓC3ky8$v刂 W CJE5{ͨM~;2@[KʜƠ'sv _M_V⫰5=&h1Q`U# d\{G{QqygK|vǷ."lTψBN #+;1dIZw}݁F9FiP;sEyFd-ĠtV *I)@yv C*dQ,m 2C1pM`U51;;&7/Hxjx8+,yZV0|Iܖ_.|A{uxuOx|SYL|TͯK][|>\WMiζ1޺&MQ0La3XՓ3=UrqDaH1[dC6V3ӥ1J@(1n`h7wcwFg}2,Yl1 %jEgo3|TT4$} |EalgA|15?F|r_ ;q`g[x{Wx)7tVᗄ'}}'Y-+.9Ȭj B; e+bX kQd*87H[c @AJ!HDVl> w)JbXKPޮ}Yo`W~Ʋ.^cp5jb^5=i^`_QHdݵ& d-CY{lfBv}ٓ?^ۿz68M+5շS⽏WZȫRXszJ=^/pCQ$[ DԘSD +|)ct 3mȿX-wS?;^>/?>Ҫ1r5G0CQzB"DG1%[k[ *H@9S׼5]ug6VКf8UnU/Nn/U[c3+ &-g$bI 11T2w%(*1xAJ5tPHٗb tVTZ2v֝lD){Bݱ,T,|VYQUud={غ*_-~Gz 4q4| _>%6RJ U 栠w^c59H(,AM B"bxQiUŦ0:N {‡L $*;o]ug8P.Ejw:Ej^jv`[ZL.4a%2aE xc.J9AH'%Eł 5dFҰ4qT&@7 YFsY$VŮm:vy* n*E"v>+M$jGJ)`C'eB8@ uTˬcWdH*YYVCN*U` t|JI+tt$ugº*y\b/Zd=^xj3lePԚߧtJݭWX=>BG#' -<C/Q?;Ц@65k"kM`ؠF>eJ8%aucuR85Fx.xF(I;sT j|D%b}0^r9NX>%ˤdG56PNXUB"cD I`6d W ( <)@< 9)$r&46qkry#h[w`{σC㌚NhL,?}Iͺ(Q[}h2]G\dI5O)I/,И9W@FMq(+ ڔҙO'!RtWĨDltzHn8)l4-ƗlsG KI؜ :ЫڝgD1}VV1|.?gioMRU0FoMךTx_LWU |Ak3ig~_h0gR )DpګpgV -o;d}MUs$=+0 ::G':phzF?[߃hxU2$TXǚUݕBpm>ɧh|Y/%Nb<8iKRÇ  9'tNEGy~iƸt ؔq ni^z0+^MgkO.O| Vt9m9 e4<9ͭs{H4O?_U!>>o9#Ƿ%ֶd56Z oLI+x,S|2}_Lr^?8%6V׍E$a [^>\ZՔ2b`2sAƊh!0-Q[ֿ ~qp:#PZxp[{x5_˒YLrTʀhrq6 ?p^~0TA& O?~O?֟?O~ORO_~{GѺ..)B=ǟдinoѴfr ߥ]״;*RC:˭ꈙ.z|: ֿ&c=$6Q?&˖j+IA.֩Z QTw16R7(%A)"+I/mmXH>}o{,^`$%+"֌"5 iId=;u:qLP!;[\rUw 
x맸vܚqU%gx2)puuB9N E4ZklL ЄKD00)ʞ)Ǎlger!-X:S⎺krEkB/e"i$NJz C |9T<`Q:J=>8.|1vP2׹/j[4> nhƃK.Wo|"dj`p<ɭ=׽ٻxmԾ+l^v+bc.'W}4x8fm u獃vF֞V"o'~x-52B75,f›NOd<;S@W1ՑbEr$UL΂"RVt(݆X?["=wASQz~e1uV8?fT=?wםصieLm͋y@hsյ/~zRo-jå.ݎnf7W9ntYfK7f[!l"{n{5_^boܼ*iO7IǼ˚)ܭ9|KsΝkgg\˥议rvb=:w=^.w[I77`܃db-pI84 )xEnIޫh)(m H7ֽ]<;|S^cQC=A*!tk}G;IST4d\rCwx 3b']_{Ts@)Fhr:EȖ KE_7Iϓ-04ϋ-[4U݋CwXLd|C_sq. ⎝A+j3u6`YpUQnly4ᘆg 8 Z18Y4|$q`3"[CZX.j" 3HE@\#tAc:iӽCzG{FW>pyj510!`meFbBQA) 4n~ZFgzZmG&~yKWėzxSE]ȍR2ޠ:zSSR^hEhKQ)uPT kaR6+E#{}DඊbwAw]T.Odc m w[wt.'jjEZ"L4YĹ5poN_2x4!z_>_.FnOGPqo}ͣhϹBElUZ mYk0Jx%0AUK9`O{{v:P9DM,ƨNA}6ZrL⚀JdN8u LpP y͍'%>dҗ*W}Ye(?˷g ,s=5@rR'/8?'-Xe xX.D/f"Y{Ygz6[sx><gEȏ\ȅFVʽ4tnSc>eܻؓj y?:4_fDüw~;t||C-~8ݖ߹N<yU`_&#Od6~~v9|rvu-_?s>X39agUL( bŹ:P apNK.VQJqxēMg| QXsHtP7DU18ǠUd|DblٙhYލ==@: >֜,sp+-b&xisV3w\Oi;V.1c.(fwG9zp5܁B9*lM@+S!Gr4#LwCmĞ|!R U5-D" 1YBܸSe^0qrvU)xe{)x=i9|ӿ`,A ;RpVgL5X3a N8z81w8.1ho33VNnr' 68wX+~;7{otn샴*')I_tb# cKCMMIъy5U 9p}LIĮ:&&eBA#ЂXxiJPU&Uq%ez>&KdUFiNIQIBYd89 p)T=-5;ӡ4<`Ϩį$2T!)(5ag[W.0wgjrO+kqG=PLC5^1gKY*pb}^5Tz %5 (6;I˔].ӵ ٬6GJDmcu>θ7Pgd75r66|*cOmwrp7Y -V9\{Q :^L7S|=}4&Cr2GVdBNeycYqKqR<-EuƂ5Zjgt"b@TBh L e$b`]4g% AlϑD& *?{K8̝ۤHv>wo^Or?6uW^zW2~qWv% %'OOY?MԽ*_*]46t f?q`**!K#5^O5Dćw9;v@B񔝷lmD-Bd %voբHHh6Ƹec#t7+!^^|^&A[&xr$Y^ QLYLMUT PK5*kT$p'ҾrHPHk :~>X5$#"EhV\585QxrvvQW8{MM)V3[r]1Z{,[ -}4gynēY"atOպ bKlhpT@*"j4(ߡAol][UdB(8h*%Ӗs̤+T3amUh *+t?7}=gf59~3}+|le:t\*2戩JT-T .Q*XM QAS| V~#W~ʥX6}fDRB٬d7FXl1?>q-nm͓מbx,jh|˲Ȝeӕ]n1iuw+b-eۻ 8b0kɆlD[R=Wh\$ ]֜+jBVԼ10Wb#z%NL+z#c7q6#z,633 w^~pe)\}b~s,ִiZ<>C<99彂^A)*{t6>A>JCU#aV*ےz`ah_FYfcMM ;h(csOQq@np^lFl>;ͷCAnPPێ=2صɁQTZ.Ed SQj w%hm ZCQLHFek'0CĊNd&y,q sY%1 Gٌ3~ bTDtэ8"ڊb^k S *T䒕9Gho!ʇo`NN[}t`0!EG6ݘ9JnG:w~R1*L Q׀1]|.Y6>eȪf&f,uhy־0y~+duJa*yMҟ_RdV/#1vj%plC*,| YABy, aaY !1X" Ą,qFFA* UK2eڳ8?{-d?q /U<12*}ܩo\+ڴUP.]^O"Iq]Zz0o{N۬'Ǽ~4gJAZ.Cv>n[|Вoyw;o3qG=ox& =z5{Ss˦۞<ޭyIc?_xsPok1F|OE;RjtNo'd*Pr!jΓM>|e' R}Z[sI_rg|5&]~>|kJ,`EesU:մ"hBT*i?y>弌[zъrx)xTfd"`RpT88:`U zUwԨZ mM޵q$BeOpv<8d8X;ؗc}hSBRzx%%4bKt.BEYNd~H⽖09R,@` 8=pǙ{8aчn@ʺ&9@(u y/"]ƃM 
Xs(g)$oBh,˿nw5r)NL!6x]Exi@p:1,+, Fz)R4T CZQ:!ƈ8TALR FRA q5u4Lʭ3Ӡl#*rQ܅Q4i0 nP .59xBz1YshĴfz~{;uH39YaXf/8; >nkRB2=;88Gd`2%<;H[ͯ-5 hHwT,)։^/i>">F.s4B_Swj﨧o(ttt8=;?\"XZk(RVwQ5u >#]?/h8- IW4 wO{#Zkv›l0Zs}r:v吨7 ( >&WH+7V1K͂ "v:NRVxWB:9vѭRp^vFS6,Х6 `2=KB=dvHHheIg.5d4IN#/u&izqޖ%]+i(jeZ_uz_}WuK$ªcߝ i?WZW'iZ}56C9pVU\1eB.hj7r;6to i45߯}(З1X #&CaF&DFE+=j"JB5\cK4R Sz_OӰ7vjr]6 rݛFSd_wp߹ooo´=5ß/ Y+mͰ6oz&$\:5?R %eξ,JpRTdL8ّAnzvwV ZW&,~ڰv(U4"3{SѨ+ؾT4**FEJmF%H-їJ!ZorKam}JUNIU˜HB睥Jza+4>\ϕF@&yԖ 22MYr'4!sl(ػPFޔ^@:~ ~C 쵋nwK}u_~OJ㴒*,B1Dx㢊G xQ2m9(4Ad2jG^+ ȉ2RK=I% ٔE*(t1&aELrrcw >Yv]h:}c C};)2!&N0B[;莻son67pK_6g*k~|,O <¥\핗QSٜmv(#XR`\됽$Ju AF![c &3ph-Ϋ`3fEd%$;iy ]waP w89^)G Ai&2:`9(y282M @? 2o):oN[ɭ5J_+ujAgMB^iˮpQ`{!KMxQM?.kTpONH : s}B%0&蜼qHXDඪTr+1V'V ejAHtGw7\{XgYy=EڪU{v;b̖ɼVS!wee#v%+tC|[i޾\sn}`(} T"y!$xaHL{ъk6'-{)^{͆xA YȄgE# 0#eqbڰsP6%iyQ@Pvq3WV`'U56캣v(,/l=VqRH> ?m2le*سK`ß;~>ix{IkQ>O\ mEGO:ȬW V0v5Kw}/jvkh^XL=G7П7MgFMڏ;B+<ˀ(^}Ƙi0+R/ }[n K:-^U>hLô}Es8$qinu^)UІ{o~NÂKmjOٳI矶Jpf  Qp"p Jങc)8+v \qIP\up JF\R NW}"1m+RAǮ^#\I \q;`spU5v*RZk+WyIj6v֌ZBzG7]Afs䇝ёYhYam4:ڲ)ŀ2uu6S/=BiIMUV¦\Y0P(rXZ[f} ]M] u"ri[N5Dk19^P,!zr'RQxjX4r.JE˂8oNoڜqY 0n-L9g>OϚC&%јN)}rf֠BNǥAH6:@sͼ Eza1DS2$NF(ɢ.E 5&~zA?\1,E7_L5 $O9њU!'&e 9DE!@tutMAQsyf H +ag d)!ERT*0s5 fs#AQNjB)f7Dޭ0&G:3:;-KeQ;oe؟RU0.\ǼWA{Nր6HR3QYfdNZcJTR[ֵǏkֲ|à.XhĦcJMy?Ih ؊P6pt"XU1cxUNS+< dY]jF=QRtcI7Œq[E{SDE QEbGg٘~.%2]KfD] DVB Gk y2> hLMjS|?<3mҭG>QqWũYtg&Qg>JɕMQ N򰞛Ogg߾6wtQ)CĖʎFĠ"&]WZm^b^bC/uw[!`\h4jufGT2dN3ˠq, bFl6͎HtcBQ$2zetsEpJVoڙ 6&6#j\~ pzv`B+jc}ݖy[~Ʌ%ho?N`jzt|eZ/H)5/ǥ x9.{]8-}v 3NkW7>.էlHx`Z֢bE! 1Z.WJga:ژ8[Bv~7m[~~&hYŴ)h@lKe R 5r%84^`C^'P*j-<*ҀLl8LV+D$<)f\rc*/ 14а6&݊NƵ74.\pPcE'['^tb8_d^nun(59sk}'1&UQTD(k 2 Q$-7& 3u6fȄpN1"2:ǘHh01s, PCt3Y wRBؘ8[7UVC7G/,i:(xCբ8$ߚ ޵6r#E/m$&2Y쇻GVFɏ,b%Y[-ʖ "bWWNJO,!(NNť2\FTN` dBK c^;NHBd-k'ː=3βJ%El u;blb‚*H}TI)rKl?*Hbܱ/RwRC+4ni*iI*j>j! 
\G1͙^#1Ѳ6I]FR2"C*P2UQF$# TvmY/k~V45 }QD\kpR4k00kIdR7lI@F(`)}0(9PfBṡ)Ȋ^"^jrqR5k:qc*,e';\A{2H0@hb]F@h!#%!֨]}ŸPqp"lľ%P3rm0d7ڍb ~|&GJU/RXܗS[&Sk(RIRXoVI#+")gd@Xu娐;FVȞY.lDqڻᕣ殺8 c}s0gvrۢԒ _z+OA$"51H›r.LR Sa(J3(\ I˷4WakY]-Lk4ڣ~Yph1@F& \ %*gGH`t z;M\)mA'mp"i遂Í˼pqݏӺc8!L>&ǟW{ͺ KzoQjr/x2eT4XPAL4 !PӱWveʮXP9ƁEx/M $8J8G^vFz=HT8{0)JG|1RkUTyC)H)bK!c$"&H^Ĥ"DU~o _Fɹ GEQpl9RBC\+z8J{G0sznM~^s[ o^Qz@.>h_0L-E2_O%8| ;8''ZqNy8p  تy6(N W=A|5)K vAnǵixr9uB>Jc"Ğ _[BnQߚj)ttH|TXZkBH4t6jC;5u K<'N#x%Ҝֽil_뫺_O>8m>pCr51gkrXDheÓ~=ίm$[GچaZ,pFj+W1l'1g3}ru6ɶQ",6$KaGoq]_N7J,6qFS{꽝\rdZkf,wj%y(zqh|6-Oz颦_w2fTr?S{~/($O>çcpwx<Hzk~݆K~zв8lhnCSRr͂o2./9qvk\BjqgCd.GAmJc۝vזemz~&ёCw$+߸ĵIjg G`',%@hJO;l*Է 3R}_D$'C$(*"Qr6D9pPS# uu&pqy/xQBO_|먠p2-ݒKnƾ;p"_E5ZVę+m"WYUYSň.!Bw '(JFVlJϹzNthOq\d@ r=hT ڨ<QJO5%oj,a }(cruv sl]b-=]°wy'BzE ~A/dWXvWck׿X҅ZAn`0.T3os֦mM]Ua!gBZ%;N=pz VqB%!y!g$FtpmVeÂP]U;^[u;p'e.hQ30(uH (nSxP x>gxژόcvM)yD$m׋չ@hu׹/frd!eZB[W֫Nt:o{Wf!,qݮZ;=/ kݠ-WCj;bΛFOgD"p;sk+Ls֜MtSbjSӋ$׳䢨5N7&\q3ɬ ۊJ(!D| h6jIUb NbeU09 '; uD D(C;Ǜ0yl;l/rv e\b:Pךwwttbx FWyI)` tNA7j{g [YĽt?/l {B +.E(!@ F(`Au H_"}L0ϣR9X0-d85>K:n]<4)6 q͒-Y`t ښ0gtJݜnbk=;e9)]rx?|p(_+C#pJQ4 8#ŕaFFϥPΤ8FZi HIb9WIᔶrKJtNE2P&GW*@QA?A}.oUm1̈́}S>ahIBQ) K%VnZ]Dw6o۠W4?MM ]r~Ly~ rz{-F+XSI}ⵕNWRQ˔!i1o34]iͽ/i}E>%̀fB XI$ EBoň\J8.p $Ϸn6 @}RHZcLH{$344F,FzZwDI O}rj z 5b>4Lr{/*wۻ]r{WUn*wۻϦ™-_}^D-xjH*P*V"F]9*deBZ[aڝˏg=V5{ǚBzlwS-w8˗TT"&<&IxXhf ,MXa( 5Q>piԚRK# '763s *VqJ?-j] U4z.WZ8̔CGJ}˒CgZ^˲W%:.kk#GPR7GΉyY}9]Rv04ƿ UTU/%enQZQ/d[K.DǾD`$^'l`4R*QP̍߰uK)S;ބLNjUZ'mk,:q,{v2JWX?wQ &4⹝xmgݬ ߢ4(/RDLq&(<!)(*)Qt)%pO/'i^jhđ@@Ś7RI2*KB` 3'YU:Hm19)u-T<◂dDL9gV7([Y 0'(-pNJY),ϷUC31UW[Dnt?Wqi\c_ :~kz4~1} h]MpPy~; o38 V~x1YC cT$@ @ (!O5JJa:8ܟ3wc VV>H1`Ԗ Bn6J{k?V y>$d{>:u.NC"k7{wp~|zknǰ୵BZpz?mt7[393lj#>5&wgN5oo_.ΏI茵J_u8|Mxtx4{_?>{6;__`K.m -/kF.mI6'~)Qd2\\X:aTjēx"f4KK^9V [nq4K#v0H~~~Ͽ˯>t~{9Ts-낷O¿/:p֭M+4 ]&ݿiW2H]8ubz#A?R˲=Zg~$m5Rf?`FR䢭LFI3Ckbi!$ RF^:ذ^.$X3FIYN.Xd /"[-8v0dNgj_4GվPkYzJkؙ-_s5tΝ vUòYeʣp8Gۺ7[ rz-匁*wCѻR̡v6ϱNꡘOXB0G6h1aw~v:nQgLƮH*;o5vޥg ъGR4JcAȪL֔(' Ʉ.F[$A(L(eT٘E]3 ) a4"p+q!Fi֝ 
WYŐӳox-c嗫(bl$[rBo|,Zd$]&ZXkQ ,Pmyn0 @0{t΃%tiI28vF/)PRC6!"9X/Sfh#|c>Ռ~^x{:A*@%/KyW֦5Ce9O99;?,yl9'BDp3GTU$P(Y% y O!ޙ=8xey.:S^$ccp:li k]? S4^28}﬷ݔ  nwpN6K-@: oƩZZ!hYR(A D>$U1+4$KPBeq7v_`lG܅ys!њn\tag/sW{O|.,6/y@٠xίXMgf2/QT 8\nXgqdh~X~9rV9T~_pmnQPD4`C] JYd!!{| Ȝ3*܉Iy8A3^4,<妶~j0X/pn[*~;W7R KkQL>͗|εm{>g%ܜ94`dpLL`bNA@uXg|l6q*^t Sߺ(Z}ih#dɪLQN!3HHnYR[M%R:$eQuSDHةQZc*%m-ަŨƭvu}i5OYi+T+Yϔ}1(WR6c eZ᳏%alP]`01 :H̓h ,MՕoX;_R-B@R]5 UF HE1t)L6= cwV<iSMŽ"_-^ ken~<-f=:-Ll#@^Pʤ]rXLt|kRf;sb2붌\N&C!85hGT(Y"]( im*b+J LHg@`K>3v$.A̠OțQ*ޅ7 gO)6&s{\~E޿jkMʾQ<|憨 D%o |3ζMtC|;@'S_MҢ;iLWDa]0L4Vny1>u4Vn[23.c1kA+@G&dmiQ+qTL]DS$s\&)JQKQL̕JQ+n&X4.\ zm==gm쟤1h6YYj FBX + 52<] ސ00)Kd5X\ܥJXJ%؃ug?ȳހ%[r]ʟ%ג+tŠ,[*K0ٷ.䙠 >l2l}?[,&k):֒)KߡyDDjVO(( JEmǼJVٕLFJ2FfYJ3,lfB 倅^NEeKE嗝^N|?; 0b! -Tt9 <) J0*JƷTNP^R6 l϶" O۪;;L -:Հ]ܲKSX(zWj1 T q%e3Hi-3$eERt23C@VILfHf G( \tE5zNYwa/*0n "6""6FDq@ĕ4 ^dVDK) 2(ڊåmfݖR.ݮL VI`;EY*C2d*r)hUno]Uw6HjzD#tQDvbͬ䱸hpqŕӵIldINP$(dI;BNej4! -x:13@ؚj6໼ _F1[~ܰqgzȑ_gqmEHy,0 lf/ښȒGxϟ?rl\nu$>V*|яQ_= k'~\#^7 WJ(mI#`> Ӈh1z&/=FiZ-xj!ӻFm3QU QIU\ +e2o~Θf G}1ͬE1ͬOޔd2M0MOTqajsXB'3Ɓ"S,$9 Z@e)Y)ԝFo_Sp4h.75$:0PKcb]]s:q7;^>akJ R BZrpR{iR z$rQ*ﭥo$l5BVvȐVܥą`$ҨI$8'N:&,K#Ze5nl,p@9iUfYD𽎵HǦ)Z~߷qubUv1X w/^iB* Wdpx>/{s:m` 0r2X] c*G+HXg^vNyI/y5 CeHl0Jo2g*56*DYސ5a!b^f]ܷ,oڤ;^H2njl\ͨO;6 _]u$؛>ckPFEg7 \] (hj{FAiUC t@鏃Rz msM͋ͮLSG  0N[#&GiUq^qpq'OhPHsWUn(}0s'_דppR-EH@sUGe0& l>ao4Lkn!:.\3 7큲^a04^dZLk6 |%Օu1WYr BJ| a2 `;c \XWV`ۍrvo 1LȄ]:tQ ˶UA =]#]ILuUUAtUPڞ+k\\ O'titӅ|z̩wgˢ"I]O /3[5sһgAi=O-V]ZL#ȱ3tU ]Z JzCB>u Uk;Oh>ܘz23٥K$]/j'*hu ءj*%g6٪,p ]ZstUP[]{;`ʼDv{q5V#ץG}GUtt =ylu"0}$l;FkaG`Dg8}kB CwR *)RK)Q~H.Ц? a `x"JE%`p*Y!z:o kF3s-/\+ڶ}%T?\ ${ХزU37-~Tkt%Q~$`;CW2 ]mR>ҕ]&'eE!uYVrp"t$5=}eGj-kvXZWN+]+LKzݛ[2{H^?/>nXJl>zq҂7oߤ̙P!rD)Ù41u=#H1sw1KJz̥*r~Lם miqͱ^Q!=zXӖ4On{r}z]zmcMgiix:jR K%i}DZ_"/ԗHKҗH7%i}DZk}D+ :!YD%i}DZ_"/֗HKc%t)+N4i:Ehm*(7Y{e2qON;z* 3dVP4GI"abБ'wϞ'IՋŠ#n[[AsE$ZD1 d<(TNv T88gZ͕js ;Qx\IVg,sREN:m(;\sa\kwIT慲x{/fi ,Ӿn~5#w_ǽ֜kW\8ZXW |9l? 
P2DY Fae]Uf k7cɌ6c,fh+ZaZ)s>* 4IIF8Psk גiBľL,*K29Ye%g )uR$g q&ҹlflcp2.T61+4sO n6nlE=,8+w1?]ܳ^=Sܔ{%LJ:E(ԑ\MfX3&xJq3yE"hU;%CZ]Sg'o0 Bo %kASp[@< QIoby&E{T&h[w%6QVEqVDx 4B )dǘFQ1f2r4 P+0fy,ɴkp+IM3ccpˣ1[1.lL2ʅ0 ׈J2>bvi,,wMn &_F×ya֌i΢B NOJP ]2ޱg9£2Rж]L0%uCq\  mƠ%YGɷcf!mV21}~)83ㆋnL:гvoyIr(CB֔ġ5d8AX,oւJ (.C&ːK$B&bbADd8GǬ" T'tM 7~>d:,a[1x*#ʆQ39Ss* ǔbLHve~#FB޸fkN& J22"fȅg|ȤɄ@ ‡s6~Frh=Z*W:nUϋ=/޹]4Z5\N:Y1iCb"g!רsb[1x*b|Ƈ@a[]鬞#] 6x 0EAp}+E?*BH*!vF:b6e6ZF^ Ɍ FMKT&1je_Ce_yS9۽fLݽw\E>7n9$vz V_(QTӶBr9OFokE33oԊh(Z{'ptR=CC=mdg4\N%*(*ʗ|HDD!2Oj8UMמξx?8T`9H1J)CMn%A@9%*R6PI#8R uS ^b2 im@Q[EP^I2hq z[>>(!psݐLPQQ),}@p3&zZ.cBzD9GeL_V{x;p~EgYӚr6gN:pmYv&E,oYUTON d@+t~*BOE׆<4HfL>NN-3h DRu<L$Z1 Y۽=OłtvOڲ~adj4.B2#4Qi˝6y>Z\>8[JOFCm'2 M'9q'׏ha/OL~^b8ӭ߉乘(Njs? 籝,V.}2TpYpa[ҏ^]]Z8OfËYN˰vP~5,j+Wg7+Q8n6: %,7t+iŎz.+ }pۢk2\.DmzoSϔN򨤐+suMb'6߄ɩ~‹U޷av;p)s)64;epcfw.,Bg$.7{.j:ґ nE2{=7ho&1w&ޣtYo;/x=oXx%LzFwZ^o~&VW?~y …$I(I=ۀ9If(>:!Z\}Δ9Y }3G|ox w4o〰5!hE,>4.yib8;% kEsT(^(sތ e£ǪrPYSHk +F?0m[- RzZNwҒ,'ѻr2;{wd);`e&|~Юml|#żDaPlպᜈEm'@S5*Փ,ҕ{BOj{!6dL *,Z bYNN(8l|aqvHHGxi2mͲ3ik,.띠MބiTjݨjޗc5^ƘӀƎmL #|ph"EǨmК FJ*9zYr$91)&F䂴<ĴU$\IjRgQ&kPF iAz `ҡ~wZ<v;ss]4zx.!TGE2\IFˈw" $~(GdBeLXό$!\>u3LI.<%O9Q%a4G,&vݬ!1b>p=k׵3FTsAOTռO>*|!iP`(qPY\NO%*K>*KUz5P㣇3Bxݿ %\{z,\=JZeqR }\p+pOP` d*KթUVc,R"\1XU vracN.|`&`;/dnAœqeX7 o+0y> %fv͑㜔& dqs=cl,L [~!'@ XOw YIA)~ \U[_s06;q_öhZn J$~ϿSe'=e>Oyyd41pkdclHM)xʹ( dg 82Gog)d aL sBp6cȩUc,%3 \q fUS,-WWYyp%(JOt*+OƑ=tURWzp_E;J#t?l!'P\x[jf}%_w yU OR6V xC-\% /=g:킹Ȋdr5㧿]M#>DPswX6/ Z1?fvnqS‰!4m*cA$b;x0>h hM[$}Q oUN9рȩ h`8{?+͞L%p5TJTSj*QM%D5ͥ#SA1+E]4jEvѨ]4jEP=IVWW+ֲ,}-K_ײu숂aU'9G")%&&1vѨ\jEvѨ]4jEvѨ]4jWp#8])q\ʾy/jlXk;jpQ">*񁚆C4iw@S E㎆E룈W7<sL =UQI$`Rz NňTD: &80(EN\prDQ&h_^J5^L[*?i6>=q>,z)Jݚu/U i=oWadh ¼Q'íޝlLٴ̖cy |\MVZ/"W@1tn8vPE rn-G"*H uH ^ q4tJU V42g;2*Űf싅0Ҋ339\|;6a8r/ ?t<ڎy?|z|y6E8:;_ҍz ~|&G+aZWޏRNWMÖϫe⏺A2+`BTJ]ybegn$!; w9 ;>J9!fäNL9"c\ 2N@J{/5DG]Ҋ`xuA(42Z$Φ|>/Pg(^n1؃qW ]$ "4y]g-+P}ri F-!VF32A <*t'Cp۸NV{D.KRYzkJ(>5B;K1$xNΦMSP ˗RY )4;c0In֚Eza2& q6iJT+چPWitV'434B% ljLDl>;g; f`FUjq7Nc! 
0EwN΀+0[$qrTe6g&Q,$COcDc3`~u.İB[mhw(zu39 wt(yf}ҁ 0YUt*UAT,OI}$x,zO=BޔgVgIGؠP&nH1r @hdW׹+}պ t2waw~8ǑYOϔK>}H5u}jHx4_.P~?=~ET텠;2.ŊhGuw:}jSJ*!H`f!*wBF1Ka<: %gFZwJpxG!p!q]d;W:xӢeڕQaɗ_\zm?ޭKm^w+`-MΠ/TMU3U!u_OLݵϓ=oUZU t@h<# ި7cAXD>$RR\KOjd.@͛;pLqA;m. A%!*,:+МP5,u > gW=HJ}>31l:{gl6~yHp?L7l6L]kem 狓eO q~{6GZyM$528+5 IsDyș$Vϒ5"ձ+:t<('OG@`(hJ[BC$KS0ڕMW@dzt5^,}iPeelX\K%Y*k *PHS^L2]%_J2[c@a%@,PhW=N+-V_; X*:VhLHtJWNX]nZVibv \yN_eL68g12! L(IP͵}=veiR֠VaUW^e~i%9z"4R "`re qKwEH{<0zb\"/z,DO0 shyrF cg9}ꡅ:@ p0 !**hEo3!HDdb~'n !t1FXɜ= 1xn 9jaKSb&=7zx|+[hELǯE/gu~#o{gOU;[a*b~>a5%a ~$z_qШKɶ#4:p>C2y@TZ$IEE=/uā?Lck2Oh xzcW5Va*D<*`ڒ$xt9cm.iFKmƙ)''2}}ïx985[ԯ/ߚj.5"ଵbc>{!W\9Rzi3}eڄWԴ,KZ~]>~|q|a2k^OMxt|[6{L_ 1NBk֑}ӶaD0 D0ۇmɴOe->/W=ߌ9:-v,d=jmX"ɼ"qE#e`M,8AaCtiӭQCQ\3[)?~9~:/3 fb-k(;qAZ}e@{4.ģ 6]ԳYXGA䗏/~/\˧='z/ tUM¯?2L=] M㭆.Wl>.2OĤ6CҎj;O&R~,6*M{HQ5ܾqIZ")'\$+eP)yTPk?qxE]_(hq28ԷXs փvwp}]:uTCGB2N‘ hIN<ad*f0-% BL` .m(cquvOhS\dv傼!a @ߦM }E染ŤєߦV}nxxK-Bmk@V- Xdz |m{#ELzH뙐Hûw^Vu6ŝC8N J2 Z5tz86Z`&>u @Up&WzNfA>Zm  ҇6fCBt[k,ASP\}+\d'>/x6ӱ6enj,ֽAܧ-OYYcsus uwu979y$V3_K_a~DӠ1מ|ZWԺ^i]-[s2u;GanRˊZjxݝ7ќ{ry(nּϙN~MHwtĢo;: ted`HgR٘mWlV??l"s=%#NT*+&M1E@0"*()y e:0ӧ}~ >+Bi,_j[zuT'xFx֟omR*}ӥ6Lj:($䨜D!kmνEdc8ɚ"&X`=GxPrs{3` ]Q#ak!jmz: &rI@*HZDm[nU&,fx* .Y+&h/.)WǮkv6= auosBdlzռZ>bnرɷLU%IlČGy, :rd\v.#.{ݲ%IkcX&3nt8I=ie==ymG$#K)*XVCds$ORD>.:۟S#E:#fXSgbzv#<7~*4σ<JLI6.5{Od>h~ښhqrtsΆi[0Kn咶gk }[2(ն('Ciɴ:\X[{/ΏN(.Qx@ ` MϖЪ涥)'y:-/Xx6UJuU'_ŷ G#+gbޠ~Qx ߙ܌RB٫;7e),ˠ݊`3t 1ތ^{$h~b;u 3F]%vuUlp RW'Lj2M𳫫0Ϭ.{fqnnRW#[k" S&[ 2M\CH5Zʘ|M0(uz_UV]WW򠮾BuV{OЕ*ި+V*q;_5x/E*16qKxX0|\ҝ!n4O5;oS| m0XNvAw|hBX'fEb&u}A?: ؐHmFp%͞m J=ÛLQswpx=RnRţ>qn^Z*ifbg?ߜr;b#^S >nK 37XbIV7[{`KqJs!њJvS۰I0'E4ߏ UW"d%q^2?o1Bk ͇1fZP꼾P?OK&MGIP+b}ѓ t^쬾rQ]g^*MQ ˃>g`s qM5JSwؚmZmZt]3L5[Nb:*Y(DfU$:$…qcvRHdwbrE؄ ADH8FҎY | \KLyiYgMw`WֺG:g '=I/3%~5 ˻H ӣդ]ײ@܀љlA$DCl9Uւs<:pqGXZAUCs3Oycjב !ݯ\NE$^* e0~,``!AyZ0շwǮB@pݶ"Mi{C[o8Hd%HHGsXY?|TBҊMM6x"08mPVYB"J<凅BZwZ?FYZ*猀oɸa 0D[#` VXK擬 ߞf߇I{k :O>1[kw ui$us䇐Upa1>FZRQ/ٛ^Wl{)aே=PYHc Hckt|!c3+ۿ=O@^ΫgJ`H- `M@$HJeboazs$( &x?_e'q!fצP"EⵝZdru}2QjAK)Bk 8fЮ-6 
܀F37<ﳅ1UyB[TO^L.^;fK+2{]\sKln6GpMR⃋W ~'){b|yOWMݐn! ;X> ~,!Dg}zhzzM6T"-qB Isb_fa̒8 K(D$:P*,ﳟn>׽KPm5(SAS`UOu d|d%ƞH\>WJ=0T[9ޠ|%,?<7绗׷/߼;Dwo_Ip`Xrc~ل W?=kU_5UEu9jKrC闀w t1f8b,na;w_GHۅz+>rI)pcPvƸ1:,Vk)C+>HOP+X=9U6*Č! < qX) >XQO8Qq;trSZ.Mm69qr#bA'%n1PHi;lP:OY}q"L^Nrx`RGSJmΌ6> PD)9<ZM?\+ȇҤGgٴH{y0ebHLYJmYpsH ~.ߩAy7ŠUpc MYt\@Z&]s|pP0vY4a tSj/;^s%8J*=d&˼iݗe7}_xtDe. @d ?Ep@l0Ʌ5M|7 >  M}ٶbr/& P^ZWp~1}לVj-=p\5<^Ov\%ԁ#?Fy_t;Vo|6VߗBA#,-g`ld 6([L}£EYt[I+1>(bxN V)9q{i@ R ) LZM<'N,Rg2DO"k/%%\ ΂œ?CI4 G:V Z1 iIs2Gzɻv@n9Bw@U50V0QTXlb?ME٣?:g46Un%~/̼͌Hp7|7Eorycj&g'wi.E?\? ?P>XӠ4]:XT__|êkqb|\SK{]4J T!eG@(k1_/ AKX|+ fɯMhgg>̚fH1WHRSX.Ȋ<ȯN*硿CQaX56%n7 v`@[vpD5)&[CrꅕUx]eUʎu ٛcQCPR~@viZȲVc΍zdʹ  E4RQL-ƂXEcN]T`k漡nND;ɾώ: nQbeAYv ੡"F't\S=C95s}Pz($.^;UL3X]͋)mBۭ_jaJ >yz$Yn Txu].?K`\qXt]-10 f1grH8+:`a|Qe WfHh`k\ra B($<3`M\^1Tk%i$n?ݗ/.vMzK@_˟iG<_x+Xմ0W\=BS[s2'w?ȹEnr/&][SixʵQ*g er+/HpgxsE-[6m<-`\!dWNi#RP Rad YE4{j1K[Û~fvZv$V)Z"O1S%Wp>ΙZ%Mn8geVr_#04~!([JL.0"4=s$HQ b㚫gZ{ʑ_%mdUa _ad |HȒVdA{o䶦-]d[3j[$NCɞWnS`eTX}H9Q ^%*'0b,h…puβ_~ksK+Tѩ%u)ĔA\$kkcJcȝ\\~ Ew>Ai$fU1}`_|b?P9;틾y"}#QކO*aU<|ϰR*/* _SO/L~6Xw.0œD[+ 5AJMG*ŘI/KK/iЫ9;2c5HP!oO>M5ϸ~ZnVƿ$/1XOID"(bX崰DEiqE|of>_}j٤}9,+ZVJ\yZҘ%魄|kI|1Gߦ{.i$8GkT{n3֥ݬTm)EcThM[TSU@$^C=e鵵؂&W@~:]̜|}}3f(J}3,9.hieRׇGݽmf+yͣ:uF#iBWC5%_zQc%ʶpK%,[d|.A'!Xm:g*q1 g3ºڜkHd*/+sM /|"n={6RwnR+; ͔.b ~hٴ}qqOl[V:PK=*!b& kC.c b=VKAsw\M<K_xM0rN/>bp|Rn\D ʾw馻J%ƞQ]2Ϟ+c ^+slkZ1z4Ѳϯ"J1lTY` Q.ɯ\b5y5/$~)y.GݥX1Ƀ[[kU5Sqgj{ s̍ڪ6R&|vT,Rr>&Ѵ,Gص]~vmN/牨ߪ=$iA.{wv:]b#zO#<9k !GPt*xt~]yS74鱂q=.xփfzqp9nVD!! 
P 2&F87ՌCɹ0U!hYT+ wVI0WRfl֝'7*ta.ԍutVuGulWI=ٛ^Od'/ jU絭k 5FQJ!D-Da jJ.=^P^JhDUUArQ2v‡L \IT2Zuq|bn:EkAk85`6IB /5LB5IC&)A`$㓒mlbBoaPfILe)fGQ|!b.6Ĥ >n֝aObl}lMch8hćxihd!yULYD O%Cfޖm5dHY&ptJ31yCM>%fʀ MRZ'j֝'yOⴛ >:MkqmRQ'@h\@3$u"f.FDeЋЋM!4ևn3}&zzIg5Uf7ލb#G} OVEy3~qL9cRE GHgrJʝ6w>{괫I30Mr;#ьmrwTGPA0"cEaA5k3L덈B\![nȺ'}t!4]^Yf3?=}:p dNLx%?(0ʗQ2TuS<[ͅvY(Õڲ|wÄ.gx]w]rKX.-lTM⢷Sc3tQX0]n[mag{pov0F(Y&sHbUS%*A1;Q-PevYݡ>e[OQh}h2<S(ETC)[#Q e$6t]ŠI0h"OoV6q"ie_SY_-dQ-Yɘ ^}[u5/bS"d|<˜>V6_"&?r?K;y:vч>^L͓)pAkuXY%!V>-d6ly|W_Mit^FUm|g_zkE$-QZ6ϿN" \7d%a5wm#gbCC/dMqxYC英6ёc.۴e~]U]]' p|*rJR7tUJ}Wt7(^'7:& @޾>}~DuO~>鞾| O0/`jU$ wh47o*Ar7iv.ZnY<,pZ>^2}^xuumFh]"#ב>8A R \Ǵ+e"gSbl2FGƆ99gk{ML@g$0dx&dQ(ikRL R2Ei2)]j_v/ebC${d6wr9t.WgՈ?pzkH-qsX1F: AnZ{1)5Xd]Ld LuR=b_1ކ$Q@C0HP.naG!@ZI5 r卾jꋹy78~TH4 7/kDɥxxE˒Mo U^KA|mg?bB|[a[lEͶHҒZkvϩM&VC +HJ )( N!DӘ$HFz0)fJ{baZXiIƥ&M3&I.^J PΚ&0bSo\(?-UUgI䍤Mc͋DAcդd.2޵GR(BtZzJenqٛG3JProjo+5/ ^yD P7y!浬ose7YE}_cϲ[*.2./ eKHx4ݟvԥeHO+ts$ ݼDnn!G] >@--4GX ΍IQy|[p|o""|3|;@rUR^%P3Ri%xAh<ԋ~z("R \Er5>j%RWʕ-W\KU$*P˰wTR#+J"L(:r9RW@opTjHW⫁ ȥ \}+R#\i8?$e \Er5=j%W@´ԆSZ3L~' zhn6ߍZF%cWpv1\\HWZMJ4pJh8^\%ލRNHQd,1[^rM3H(򩖜N)bxЄ=^ S$cD5:ȥ%beH%#e JT VpTH.WZ."W T %bcF" \Ejw*oW8^`Grᜏq0fW@%!\ "W`z8`$2{oTJ#+I\9($X&MFf<^pˠyˡ'a'kIJj|K]p(?^/z3ǚvijMH?JVB {wqE+֧_pw2ǡoGF !`=3Sw]?/ҍ/‚꺧|{G$ c}0G$BI GR7ǣ86 |M>&DhA<ޘ1' iIEWnj7 |M>:(ʹv{ky"ֽ$87/eƋD{LC[XW 6LC{3K^Sw:%1Yr:˥ d;y4Mrkl2nOƮ?qǿm=} T̘IhXi,H!g߆h_s'*Gm ',Y$,حFl55@9&\"L\>S|ʹ0 +Y딳?^ u(s2zs*r";O(~$hEKc E/.(\a/dzZB -U[B~kDM&M@di24h 45Pt8ȋa^5-<v>je04SL f<\H5u֠jYh}mIfRU{IUՔI9ۭ V.&!HH YoDe…L+rRH5u!ap) <>ViǬQFYJy>%lgK kYͦԵ@ӽkNGKaK/Tכ7q_^߹bZj43tUsH+IZ KAHɄZ G2X`%c _Y uDKx 4_:;"E Rr9&L* ˀ1#AyxV ʏ;uxBͬ,,r4|D"<xD(E:DpAp*Vl5[ښ6a$H Zq'F0˕CHϨG~*nyH269jYTAyV ,8pȖ{ 牄 OYh}|\qoE&*,,m_m?{cL yxՎt&fa&?IK#t pbg/C@"vVaILqGwbTq]M%k}*K'EhZ_sśydU0))_saY[cbI4w=~CZ*[^׷tQ lFRb9a0)>A|'6{ Ss꼓UV5&Z2$ÜGb/q t6 0K"laaݿҕQ]Q^-gٗ7qCxdx1.ֹ8XzB/nCq|$0F"+ȩ*UE X.F$?Hv<{<<}ۗݧO'ӗOa e\t"A˝I}4";p'4iho4Ul1o.r] ܅!Dx1YH|u,Yצoԏ֥;H^-奡ǫ#Q)|%q@iCAWD+0 2Jc e(8c spIs>׾;(nӓ-ttFKN 
zX3adFYPbE=DaU9өLgю˷6Ěmher{@!7E}[]dguʕ::vq!%TٝgU#r%pC%Б&6B w0  b+mDbHP< ̋ʃI/ "X4a<Gv pn~ͷ|K9Z^Gp):R E41,0IMu{w5gr]l 2?笠e 3w*8Q%!葇E3ˆ^ 4񌅌be}0RSFD$EV @\c2k># ָ6/_MiN-1~V( L`=r7>]v11ye~mm^ p} VcZ8`yqL eSXS DDk\ 042ްحRsΕ<(#H% 2(`ExNj,YFɐ!}BX{))nœћz?wfBcD]248:3{eq?RrI<p<= TBOQN@ G EP=D2 ƷSƟ!Dݠן;$qJ\Iw68FA'kSA ڔf':K ~'y6Dž]"Tջwuo);]qP$>obEgpw0vϳo;j k {}r$"7W]QamM0͑(ӿ*FbJ\\A:KcWN*[Nexv`w^ŏĉcy$'w-$[v,ɖ(mtnV?h2"U1ǫ~B:}:?dltr5a|uxɇg"[9֟s_.o*[%[:5FʡVCId'hoE2^}-q#wx֩ U9-pz5YXe,I=$N V7)D^d>ǬehC)6jfs27K2̓!^q8mtJZI^%u_.贷3c&,hjňW7H9."ؤ˲|z?Is'Ic6& yJ÷,?cʢC5*˸*-gNF6O<7 :Vf㉢@XQ,1&"׆RV" f^@DE)1{͢l`Ke/5k ϰ;`RՍ5rȺrACL8"h1g@ Eup zM^ ةr;a>V=c;Dq=il2q& l_}UڿeB@SHW erZ8;:P`<Ʊn0uռp{բG-J'w Ξe1.vJk% ˻W)~v8N艴AC$TTP(iBe=: 6Sq|$yy# 1 0! HaDaZ~ҴD#%ɔ SE7IuNY4ҌC;ܸhd˫O:$QDs7-F"غ؜t ٺɯ\;:dbH1ktR4ǶbuR,BI&@Fc7u eDƈhIWqSIc`v 䋔KOG.Q񶱃432T˳[xDK->E+]H&oy * Ay 5:!yh/^9 id ɬ(eUOsEcA1#g! :Jx_WW:]h{9혙4^ g3 6,ό sGZNlki灌)v':l<evDaW(F~L9x`kkzKaiBkœSWNہ(zح&Gtt$tzq0tt를huDc4׎Y4tGZJFpt\pdC" _j BK>8QTU"`jt6k &u%Mկ7~jL3S6r/LXi7PQ5Ȅ#vZØ$B^x Fm BnzҰi%O!Ur#9xݜߤٻԼs~H}OcBF߁KTx.DUsx#yrpD;=C FD<40Q9"b )ƀ2F2V9h(FBIdcj5TZCCNP]ub%@T"J轏EuB91"GVR=YWxyj;Sx=U7C11AX`pvyUdօs~.|hQ SAR(ldp{,".7c{ R A8Ved.B#5 H91,CDB\JqU\u7 X6'H:R"}}> m>Ta?9N^YdH>WL?xͼiM7??_~g|^kn~72>_Yklc?+a%8__Yٲ-'#Gy9xCt ϙ.ʡ~|>]\zwv^7c|NO~Og~ٗ_|Q˳' ]BZL/f{_prI]8kp_ {{£˵ZqyzvV0͙Z?]Mb>~[@P!mg1Ŏ8>EdFz6;pQ[N;Wd6J.Z 4D >([ Дb8vD̒QH1d낁DQxL*6)Hbn%Eh䪬y}׫۔pKn#<D 8 iNhy/}fwwٹLdL ;t(L$%1mJ78BD(0EtMҵ@V ykbc䲳HVGBBd.%m Lj7g EyK9R D(RRi d֍雙@B1d Xh[~t=lOCz{h3l`܇yKɢԶғ5c&rYض pZ=@gʿqPQl) {#-G '|lЫ95 KkE#.]j֗8٩/ﱠuq5v99[Jjh6Ӥh9ݏ\Iul2[5?.M5! 
qBd֨}R swpS 2&8TH6T_cMp )fɗbZ%+Rab̜푱Vi ]cn   -v"5`'rIo5/OymR$pFk65TI*佦 jbJ(!Ŷb1THiUæ.Z(9vcQA[b.c2Vl4H+s(QJcƲZ.-!8PgB_&jcv)K 9F2>*.D 3dF0:e*e#(>(H"2.nfxؓe]#iLjiė] N"`'r*$ Qp~XKb$mFﵮ6{U*5PMJ2N'Mo5Z&2j85fd׸hE7nFtt lFt(FY"xLAf$NR¬ц@$qqq(5®ה}]u5Yl?4>b= n~l~ķB*׋nŲ#z9p;g09k}/-!așUrɩ3Rϝ*t!!Z"NO@;>6R,fB!!g]`(5I;`7"]$ZtceKvʺOg}uRyp6anOVsϟY Z `ԑ OU奒A0 RUė C['~@רˬI.P pUIy)ջR<im4P}u{Zo&9KO\[Fkkmp߄wkw2Wd)ARaۅbgb" 1d^Oaxnb@n;;[4 bրGu* `UDϘTGCMf S- DODᴫR A%edI *y9)ٺD+s@m%pΕejSIjLc^#bqz6su/>a ^ ȠsI3O&!Xb0g=*%&MFy!_0k+XgdY*%6ߚkvju)+@pCYSy̏dx"-|!tQY8C8VZ"HRRjږD`|ek.[,Ga8OQ."\] ʻB}R Ia ʝE+1"+29cyrq|$Ir9 W CɖE4>6 X׆^kvApڪ+#-1(A#+0=\e8k?48,&| Lz O;}A)DD>ttlOEvvT(flS51}S.n=P{mRf&{_߾7'FzD=#C0eA߽{v[&Hm(k{o>4~<29ꘛޠV ShIyi-rexO6jnش } iя7kV3w'̟7'p1?~Ɋ;%Ԫ[olH{ϮilO)P}hVqj@`& Uӄd6e9Y5ͬRՍu17%nԳg7%CLtg†5*)"}qY1ؚ\X.;kuXY睽ALj!>U1poઘճXpU4+ybvU;TW\m\U1UQYXǮ"\id=bb6WZWd%n-Jnma GeXa}nRp^,᭣' F?%HucMOME8ZPz U]?MEbqK6&_,ZL [լεwWל[&m\!}ϓ#0\o]Sx 6 _"Q&h=^'_ QjO`fE7ZCb%v orY8S2(!`FNƹG*ѩD`c1 SO5^RUryx6mTjnկs9`J1fPKGWh8s2jQ3SWX%҃pJ;'<0Xfg߂AbVy1 獯\t9OxH;T?cR2[=͕{@,6~g͊xsY{ҩn9sY4;)GK_m> W GЀ$|mED53[*v#GKFUtA;̇d4YHp F뭴]i,TX Le)0+?;" w7_nӌ=eC T D9IţEcI !Tޥd AnVS84ִN 9['i,ᷝ:aҫk7?:j;/&U.Y(lƠFin62:/eSm%LnwN1b %L  >e$&O',, E "ء*GJq!rAZ"Jf3pIPA=Ƙd\#V3iHCbs 5b~d{Ԕm3o?^tbw}5 |t(Nή:9NήD:9Nή)]'urv]'g͵Nή{ ہİ~|q55:-\i-Aw^4,Ory q/G[tv㖽@Q1TFd#xK$=S1EɷQ!r؄"1CNYW6ц7#WODH҈S*/ٿR^E>D}BZԧExx; xyFĝrBhCCGȎ@OĠ#Otw]ݎFܡ"GG`<+欐A唥ЎI2NΙFsRA#Udu^H%3ځV.se*X͜gOp`˫ZhϽ޻X4) 6:߶y[M.~oFb:J|UifEuz)-UtO?f * "4BFՀQXtca:B4V6]+ei!+v$qDx|T#p%9]T$e#(I}t5<45f},w_(]O',o!~^yK3)pQ #,yͲ(w̔)*sLc*Z7Y,Z]Sg'o0 $ Xs`v(o-sKy708ט_]V rOgJ%"SsKɸ8qhEg2 QAQ|mXfXsˎuq oաܽd7Cٍs1Xb31Ȣ5|'L2p˸ D_]cd׭ËxS* Ì7ܦ=pcp[,FeY5z+j<ӡ|wYȪ><럍_sسZ qKh}/uSGm5 Y1ǍlHcbnl`hsM3 ӷl$N^+F! 
0nq$Ĺ(dL.J (!3@^t,1ED cVrgy[~V b."BeD;D|\##2C1".ֲrH~[W7]p$X="f2JF 胁( w+xXmtXzx6"_w;v|r7p/9 D=>E?Zwя-0*!(0S1=m7?Lۗ[WF^gzg`x ~?xK2l~Z/kJP-i^i&ɲ-=au;Z:pϜy<}ǓƒA:@/Dpk!-[w0:hsSz+7q P=X{,4g&8H+F_Um !'SlؘhR#u1(ﻸ헉ۖz㶥ܵ meKme \ĘM2Q#זDT$X`̭%R|u~&2< REn7[F흔riRQ }",\XdXwVt՝oW*|κwMdɼ=!G+s?'|N7f,k}vy~K:[mїXfLsrg%I Y;O՚qTc&2x^CjrR˾B'5Jr`iFO^&2f2xEdR<+C!S o\>ovTr5-:NR eDo"8!hO![?4:*H9q Z0"awC-KNX uJhBL_2q%M*3W8QO]ad}W&9:yD6:_v9H1J7!n*ÿ5Ik1.L4qRiBFZU+ 31#)"4ʫn"՝Hrjg%#Q9C[Sv~7'@#N {&0p5=S4 ś/cutBqiɿ> ;v=Ohʭ% /8.'Gk{k#yMq]Mj?dvCx_]dG5,Xg8_f>LL&:%ɐ 楣%jfLjD꯮ޫzD ۨq&+1*0yjӼtVOGkgqDK.o fIf]K0`eR۵Ñl8tH|xq oٹWa W$)+Oz |"1-,ZS/]|4עgGhuzFV=@hHx?,Hz0hR7ӐbhW6*tmF"-ҳ짛/ 0p.΁Kʤ! Se~ h4.-ͫ(PzR4UL}_6q0,?`| /@oN_&|?7o1Qߝ[2 ^- Ј<G-z4fͰӡ}xp"-uVzb[K hH-aM1F: AnZ{1[,AP2VJQm [2!)ciquwp]lF7WxtE>_a6ri ;˲߆N.X{wrl?GMбQr%ޓ%g3҅v ,J6 jRs_kZ9nmַr$IZ97e/jU^@rdAIq !TL!%<@uԞR..6UKb5D3 D0L#d-M9G8n(zmx$)vʤB,V +x Xj<.Pb11NrAEtQKu O`jEP keoo (jy4BiLX7tT1 W::Iӽ@Z'7㋉[*2[qR.@Gn|Q 4en)Sef+/&950쓬RB{l 5"JBdROr¥M|`-C e0|$ҕҔ1X(@׼;=PVQT0K=l@꼾 ˂3Ԓ4nf;+rv.jm].> ū×aE\|tK,&5Wj` Ic{!9.ڟJeǫNtY W}7 d%7WӕmPZCJ]]ǭVmdorWDЫ{ qՏx,T*v Vp5.XA0IہNڸd't~49b}9' '$ʓnȺJds(xꙶt{H2pH )"ŋ4[$[}A ly~6ǪY8W.A #YW:"Bɇłpc5H_ b*ƨrThªqRyR;mTu)gҹI7 % I,.:FDFQX0c sH*ƉB[,X,^q]$,f/0,DHubWNj J0fo*Rxd`ĿZĿt˓ grѝtfm4͙+q1X EGgI4:|\0s,MWc:%<@rFϵ,wJ=wr_:Аγ,xҳJ0%U\ *N:H-pkҚ #Җ -VMpPe8< Z _,a\-A#8Pl}9BFp;M LFS^wD@dڹڷows$-rARgU%qLY+KB)/.{>7Rol />m q<[{qT_7r A001yZ@lPV/o͕I~˄K; {+xw͵&{X&DSpXo488(-ty`şd}s,~MQanMGWaQ-KWfNO*Y|2eɑ: gOFUrڬziNWjXLiYw0r̛Ҧ+>?zWLpg>Ӑ/Om|{c֯ |ug/:Ο3AO6D ;4x0< },tpϢ݃K;8 =ɪfņM?E%ez<Fӓ߆Y̍), A*>4Q`c=@gπ PF-QI:*xW'lzr-rqs2rE@ыb5+hbnw NBN^/S=e+} `s9=-(Nxh;' 'bJbJ_Jxs>>yobLV3h4ci6 xQcT}~3qѩSj#3rj1!o9[Ӝ  iu6 AheePox5McZNyܤ_m.vQs#CSiSy-vNOxV1rGO`n}( 2@0 bE4RFvKOfľ>륚j0C8ix*K2@kV"ӱF sP@:E \wW Eި۵͍"Uv{~oek U-GQ*"x;OLdͼ14IԕY8lS|Y,HˠSf sMɹMwAR8ҙr8h{q[.ԴP`Iy;՚MD:Zʄ2a^!X5*IZ5u[u㇙ӹ3h1(0g` T`0:l-}8y[e qD)P 8N^Yc4_-5]?v)no𢟝sAe fFXx>_wM6Fh 01 2 ܓH bY}VlpU+=/ ؙaeA0= ^Uf``>ga1^zs]Lt7_Θ" , PLu 2-tjzWEߝb 0㭑1\"c$B7]H(d#1JH\BW -mJPN%:FJ+&*V9Ipj ]Z֛NW %]!]L˅n ]%]AH*d#+Etm]%5UBh 
PrD;:BZDWU+ZCW ndwtut%M B\#@p*]#]i$k$Ɣ\BW h<]JC{= ~,ԡj+X-=L-Q- wtcLj]%\\BW#tJ( Juea(8(äiXDƴev+ <>W2:ſ=a=7l.HyM[3tny$XLjA=k5|5XZ &C;Hzp>6/h|j9MLrJiZ &zGOSEF;Kp)nv2t,ΎP;X ZDWX*UBUBEGWGHWMYJt{ 2JBW|+r񴣫+N`' TT;F#զ+l ]\[cYв[$wtut%)CH0}Yu刵t(ήnSQy{AKyk-gt(%J3i]`"CɶUB+e*QCWdá'=?j 0G 8Vh:V(u-ttcNiή,hk*U-thbM$k^OxgD%j\J'mڭ(m'E6?)%bw''%\bZ$c`tkd 2ܳd %Sq2Z"Jj ]%xZ&M+@IvՋtU.CmV4'T#+.`~/`}X]%5c lRt1ҕH6]`EsRJhEJM;:Bݔ6X\BW M+@)֮/::RIowEl2;:F ڦA,i ]%5`BoȐPΐ+RTg:2\ nv(eä+]юvz, ZVOQNUvon%fclIwm$FŴq%`Y%kbhQ!)`{Nŵݒ(-qum@f[]d1<}Ӱ3<9>8 0qu}CH_HEX\L鷿}5Zx/fB]CM{tt)>@3Mæ?]MrUdh~EqplGksَOHՁP'DW=pOG`x;J^ ]Q=!`㏃7<{tQt GvZ ]uV ]u+HWNhE 5pI ǡϞ:^"]Y=+5"]Y ѕ%!Е%eԳ+KdKϖ}x_msyQQڷly=e6 oZD6k.:m8y޽{ !~7XioOzdm^[޻&ίt k@o>uEr)A_{4nzD7k~7dX"jGZbJwɺ?ܗ~MƻLY{m$~D(^ټ0U8cCeϨ@EDrn3p@|}uy̔w 5 b =s.UmܬZA\i.#So}TƞP"|Z+[,*IY-RI[iGiyo5W$H#>.>C,kyݯOp)ZբBl T}KɉѪ8-%h&(AU\P)&jQBr](cZ" ύ)VMc"9Ql=NΖFi4H~XhHGM\ DA@1WAߵUl"IZ2 RR(OvLX+"TyᓗւQҩd FDU*4פnEƖb1d]Rv-4y9$P,fL.IiR GE5SYLuL-IFRKwA(`2cRV"16otʪHR,-u߭!# Lu2dYZH[ІiNDIiD*)5 F`ߧw@p+XZ\췙UdM~%|m=<2` bw =?u}1uw>fhiV:uH )Sm8jm?oB}VeDM*1ڜ1<'*RI`Ef-H2{g^7J?& MC`ke#8:Bҏk0uĀ,ڀ(̘`MDD/ 2%IkSm'MA |VQb9kOeFM*\ J{+[lڕ̍Q1K"etAڳRȦB@*Q7W*KI!o0L%Dtȗ BѸ iXoS \BnxH v+ **T(:Y-Yy9mxAY]7n0P)b&[zW`$hty27p,Մ6!:*tŅ%̆jH,ƺ8R?SISi+CR7f-a)-%}bC*ZeD@Ww@AQtU:UC`_bnL Vor3E ] * $(l`Q&T$2zlI2 d>^T:FjAuPaLJ7#K?+$]45-^Uc똂[$;F:!.%) >@HE&rZ!d^1P>%E"`z `a~xL\t辤"$Qkusx$ ې, x6ܨJrB#+QQgj@bQI9¶,9겚 A";> Vdӭ\x#O*q>e[}%0)VXD(;Kb f R;ȋQ B-r!P-fH Hޘ jY$\h B;ڱ ,<  &vRm>Q5H#ZMV7t-A̓7pzDIȒ,B\s EQQIbD"~4"$?<B=jw7Y jZ,*Ga'D(AV8K$@ A1)'r(Zsl<~C-`e?C:jϢ;k4Q8T52˱ o JP)BM~魈 KUm,e Q IX@jT_hkٳ)D:gj6q>\]uϹW&{  P7] L3JG+f= kpS`= ENiVnYkZ(ΣDjAKDm1&P= @rTZTHväDy XmڔRA6G=#ʍZ- hGr9V32TAJ`PT YڀDzEPBz[T}f^ 6B=߀yE!XIBiP'W W`!G z׻(/V1ZSL R,ʨ1@jJ ^LYL 3A l?옄j~P$pԅKq%G@B׸r^G~uE#BPQցR o`j?zп?oiXonֆf6ͨzM!4\kJ?oP~ǃCicjt S$Y4Ԙ@=).+tw%TKZe=Xwr}}WuU]G ~Y=\=R;a/x~ޖ1(2>|ܶ'CCMvks_jzXڬE_BvӼ8'_[iڿzh{/i?Jw~yڭm Fky:+5cڧzQ(I6\@ÕP Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\[ޜ 'd W^!%CjLllzA+Bpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ 
Wlbշ5\h)=t W] m3\<{P*͆hP`pņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wl+'A5dr2\1'co WNB%B?o+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbp5K w>8Տ}EooP= ;—ݘe7LrGxmPY:%XL~da@krgC3J]uޜ ].ЩUGs< ]=H?9]>8F<-]֩P芎+bz;M?([@^? i_)=V0EaΌ>{דbݴ7ȩ4*t&ܤ͗l"r=o)6>Y6Sbϛ#A|l[l1.L瘟W7gOYb0NM_GWTt#f>oW(on7rydm^?{WƑ@xeȾƮ@5b $6"i[o /$EJ5Ihw~ 6w/<6,yrf4x,\psX<ÜLN׌db0E<"21dm11M71嚏db<mvj>]!`aZCWWm>]!J#NtJPJ} kBV-t(<#+ID.m+D+OWR]=BRT/}s_ l.m+D+Dҕf+"-thn:]!JOt0Lci]!`Ӟ#hދ M+DNHW*LU]!Zx]eJ9C8 d8 yhapBjg0aqW'S.UkJ[CWt(:c+-hf&)ͩΨDwd ;۞rH AZ3@w-.嵬Fz! w}9 @i:2a.OՂXՇނ|fc,1tԠ;27HIg*DE@ly*N^ 9}FyNY'@r\'sAM].$q OyrdWm sqdʭE_5tE L}qKj ?0Leuh*0:{3<TT.o3 >T a(6OtxJmڪ՞:+H[ ~84HWRXfDW+B5thi|dDWd'JB4=#]i)-+Y{|WWm Q6]}2V`FLk "ZŚNWDW¯2U5tpmk|WVqBD3ס+ЫQR5༼ۇ:HIW< lR'S_m]!`[CWW۶+,!DW[I| ;J0ectk!l-ڴ q24Tr>=[* XAl,&\$:`PlkJRIu2FMXɝ̸kG7+XjϬ]RSL 2 ![d`KUk,#+e[,#Do1U'ZF\ 45CWXpi ]!ZNWSrGHWBҀ4KDMCV0lI'"t%a)k]!`M[CW״Mh5M+D!])m,l]!\B7%=-%]i9[ `Io~yb6vnVYtbwC?/#2qFҫZ1b5f4[gNݸ80^5%#e0ՇEx5AmW]c 09#CQ [>.yURbkUQ^4V] =i7&b+qz5\mxp|? d7'c#=fꮆ`Y gAc&S/ֳ||=uX+[&|vSA&kcAR!)Xhf Rai!s` @/+ [ܕ)}w{!9(^GyT=|05u)QHBC ;\&VS|+Zu0k}`8(8.uQЭu6]8Y^o 1T\]u,ϖ:Wg!" No?kGe ^KvrnN~rRrk=dMLn/z:2,Jt3tEK 5GUGvZ(a (Q + `ZdGpT :\`Vxc!0[FۂVi-y6L sÂ4"9PD [5J@pߦa5BcUyheGIz(u I`1eT4ؤKEҫ7!jN 1ѧKMM(5 ,=O8)2EhcGU&.1,UY<?HUVO-v2{3~QSև#QE7&)R`^C_bIy¤.s?I*dX>+;\ͣcW4^.q -\iO |kÕP|OXTC/Ie~|]a|2fضÕD;Qf~yo:W! 
xȠ7C ,01#[8| oٹ4|VSz&!7tjօ%(mLQOЃ؋8EG:qumnk+P"Y)Il:KoG*ukVlTdH]Я [5L~XQU,of72ճx;Q [q 8 cdG2c1̋PSThInbXB|J7 Ͼ9|_?甙gOa4: X}o?%>g?PO~ZB)Gå-Sev[[M\]&Z#V1`]'ï*ٕE0S$5җv6,%?Mt\pË2㞎񅡉ѳ$$,',uH$k WPtEpQS# Mᴧs=2Mg{oI~3M&Ȟ^ޓk;7&)$1˪&uAT1w}XMӫβzPV,{,e/';2!Y-8>w<Ăr4/e%Kr:8^E{XC o)>>WT # | 9;cxX ]sӤ1(=^vơg(}Y*H7ݿ >lXbsqem[o3E@wk$]TK;Ǎ4ᣱ&Baq}ҰNw4xk"&D(VzPrڡ&l &+cqusp_]Әh6T}Hl f*e߆NX޽9ozl~> X4}[am0%W'$ߖkkmm{7OUWVw4-OzM!-ؾP)<>\Nk66Le}y6_:q4ߖfŸ0Vdi=Ũ\2ű,nwƿ=9E$sST0y:BšΈ99ZiVз۔;b&,9xERk J1%(/8g`kY.%ː2rctGg,&jS@ (+HV-RWxg<:7hp[ua5w G͒aD%C`m7{2W}#GH[}F7Ye-0=}t0f[Nb|E.ɩjiWgyDM rsJs#lkb6O]y/۱ ۯo`Qdn.]pş٩`E}0*oS x0P|oPz0~ 7b*Μ>t `(*ws eFɍu2--nh)7^vZ/9wlݜ.UtrUosc7QvVrK~r򪜦K7WgkʿKZ9խ6 ewm؇B+#t^?xayKgnIݽYOd>սz{nyF;[j>2_0ŊWo):y8^}[#c)/7dsL~s\G`/uT}؃p.W[psv@ۺi{ OsA9wMnPȩ1\$rᘇGdf3.qm(NPs(9TANi M0 iS0^xF$pQ`R"!h#$3s( wI4_\x>ڢ`L1G=Z:}^ K $ڨ h2mJ2i ՚q3'9&k>Z MeRam3#H90YhH)B3҂ljkwx]ioI+Svy/ӍY 0vw?BF2mԐvDuQHJ*UEUeDe%'tRL(hW#w4̐&mdLmEKG)hPH#,]& B1YWiGG$aFX.WK@i>h},Fgz/<8A~x[=zqWt݈F( t:0ǜ!Dv&VrUsE ӯdr8D=g+8ҚS~u^VE&0't=~箻)m?ۨK&6'3FYեDvy*!%eYQvB2I'ST$Pr  :̓KDt1`#&x[3Qj ]iUBCiML$Sm Ny铵Z!;Vb#]V% Q5 F":z}2T}턍pwm;xe[Tw,OpE>ѧVM;u 0)6 mrL=qX' E/E*.X ^I|CrT>iF=ڬ#?$7g4X =S|hrq-Yq4<_}΃{' 7hp0 ~afz02^4f<fh6YhVfOl;bУ`<ľ&z75wk(E]|+ nseߗY oIZ ,oVC^RطyTMwmZ-9d yeN+R%@S=QnL@;KYf% ,twE"~ӓͮg6ϣځu~^1 X.: ㄩ:̚|٧Kx! 
jKNvgw]md; ڎT4s&51nm?nvmޞ*t3wwA!ZH7h0|pQv#/[-xW$q4 =J6W.B\t^drA}HN[ \ ѣV2im3E=ԥ$cI;yU'kύU(}&%ӸC;0%]rўБUNkv&_M h@s|E38AwU1GGF&&RUѣ+L\vj#[2* c=y @$Qc%?P0zdwU|4S+>a/`P5i1;$H!g'ty8=B9iX|u+x`3b;mooG ќߤpMwDuwۡ^cg1/SIlھliO<`r UaA2V29h(FBId1_5ňQ*1DCN Ru)|0*%I.Ef ]Zcflƈkjb޾g{UħF^ɿ;?.[j=^B?|=jGg\XZkW+AzFn96׮wo|n1˖d(aR̖ RLBP z(.% C;y5P,X_"|֩Pv9!/4RG;m NܔFQ| EV3Q4чd2gRseIA2T1mffz=]χ!*Spu-^WK4 I\ڼj(yݧ^{ު]pZ`FKZ!yob )F畒Z=H=Em.ZrJSMG#vlRV-WJ˒HWem{x" ?|.* 9LcYE᳈QD($r2&*SlɯP[uuYRLSJPfWWΖW9I@7:aY`A?,/Lj ;c:rMbwrZj+l) M&S QT$ǚqy ^$kW(֡jqdPzqՋT#7?l|oL&7rNwuHuOw0y;MoL+AMdJ!Ψ,& Y4m0&DIMxl6JBd܌B )d%{vϳ2О ~ gOH̃"t6H{o[$cP)v|4|\&ɬ%xO4AjAijkr@ȈhW:bE+hF9#BA_cGt,)I>J)<[EL`<:~S,P9[Xgg.VH\cɱT ĦI=5nK?Au-xbz'VMkWRċf8 rU/K;I5*q<[BK.Tk$Nhf/PRCwNټa]Ѫy Gdjhiy{$5ʛgԄޛ1B&Z{bOI-`}Vq$ `FFlI0ƹT'̱WT<)fɕ F)+:263gfXY,l62 uXXXxiQNJ4^lOm.~7CǟχOaHhAU8Yy=ioŋl{ް9!,^l5ײOՒeeINfwɪWbB%=+y3!xX&dwSZ C1|L" )ڔE#\4l;f]%t)苜56Φqw֖ 4oTedBg':{MY/ vm^$@'g9s2D ĬLVc6޸cS}hzևz3}q?Y#6!A6aǃE?:P#WxۮUYT*aQ <6zX{^βkV*?MQ. 5֊,$Brkm.9S 8mo'n[ۖr7-moDKq2pc.igǘQhd2BD*igo ԛ>?7i ,Vs(<+6 -(t\s R$E=L a{}B̍RI%V8 VҪ })I ^ 7ȁ@D"{};?io<.+f߷R% s5LI]5zx r|~ТM3am=n\mqqܸ-ֈ^7`%m9l0u#ӓQZ.s{*##&f_.f"Mpj7p1۸:n!jo~A_|u94eiٯ+~Zaq#]MQF`*_Mu5Μ$k\[@rPxO^2N0厵ȗ[6sY7SC>.oii\2N]c"JMwl|~NZ [zΫ9< 'UNԉ\\ʖ@ff%1\M'F2#hMk哩HF:HRO؟Liή:NS8'(k/M-}h'ՊM%je\fӵ19y:ʹNy+xuQr1a ")"UQFh)$Аq۫2c[c{ұ+?h&jlGC)y$̀1d&jF O*ʒ5]CP^qJڔ5!Ő$21ΘSJpo]e kƳӑGp )%@kyp{bA$O1Gcw25yhmߩz#2Oa%:'CEl̈=Z@hg,Cfae²|Ոk)' FZxT }htP'޲7Ky wǗ5uS|v'`+zcWВj<\zޗ/oH>!w鬶B`r?RVþ*׶ o'_pJZ;Pg=m&CJXH wYkY)̈́L`l66L9+10`>Ak'+5 GeYP&2Cx6漤1}ÝqwQMF{αd%htQIT[@j4J 9A%Dq@p^xpz}<CB݃h7>"Aƈ(jNVZ)T}yq>=X_." 
XUk0Ǿ|»X܊l7+ɪQu=<ֲ͒èyKill~&ò<}*6;K!@ĬD!0yLi!u.jfAlut$uhNx~lpr5A$u?,'>5ӁD30n$ܛ$0Y>(D@&A#lE#^nxP b؅bH71;Z87F cu28oT C^rN1>iGy U|8}BL<5Z<6bEtn_ph_Y(PADg2d)]N9DZeF1!#\ 3CC,PԷe7 \?ȞRJpGЅz> gmtUKN!-sDK&C ,3gɋdS`2C1w,])Qadp` 4K[^toJdG%OӤMɜEW o䚵w/_a:,?JM*`Z_]cԊAMb-!q̗x2~(fJKnY}~\ W{OX|?//h/鰣ɻ4.-?k=_sv=m۷n|y뗎 6[@7ȣ>Tjim\uM瞚u[}L_M.Q* ]Sn,>\j@|*t6~6R꧆B}vήGeJӯq~_yIX}5gΔɫO4|w?<4M :yM":xNZx|> 5xUڍF@=>914ٴD}LYLC%/f3ϗ~Bqbܚt nQ5RJ>*SkJZhO~Ȃ}TK\ h8~}@Ȼϫ ^C؟&̱a-N/t4ySn*m =Dp 䱓MR-[̖EUPX_MwOS3'8weZs=ZSgn.bI"\x1@m{èM|O, y`;oQL/Z G<fS7T =-9b0fͲF{%b9dA%``aBiyyvia>߳ "zaP<{.EHj S,-@K1gcEK K,`3C~^O3olgQ|dzg\jk: qMC%p)E_|&)%I39-a[OESRu0A>q]bJ7H!W[[+Šd}Zad$z⭕ ɋ^T6=BG|6XC)p.d]8˙QJ6@ %*5akvL}SB 5ѷ^ D_K'-ƽןS[e]t#tV 5ttkRkc%IdE8J^(5gЧ hGGZj+,D쫂1jRzò(lQ kIgZ jN(qo{ [)S'ݑ7?6s1euB4*Y ENxt:$0J,:)2w0m1h4FCGlz;M+ [(s7ݾfBH&4H5>`('gk0xSNh+C`lb͚M]&ޓ+Z7ҊH s6APjℕt̂PdI^1BJA:"Xn.+"{RA,pZ@$2ta2{#qH,2-y] &6Fqr8{|0Uz54 IU|{(zR\Bj˄ P{C# S~n1LsA͝38u&enp6cȚ.r–UIƌ޵q$e_ۑ]7nq \Q-1H;!9M$ڦg8]UUuu}H.[Cp'ǐ!x^j#.yYd y|2J>g9!Z44YeH$xx7qNjbx8WXꝦ#mH2 $vR,//+mN\3o^yo晆b@}Ճ7N??IHz:ĕ 0i5{\i &^E#a$+T&ҥg*|K 2{q^8)HU5 #Hi޶]vWe}Av]ֽ6,uNڏolۇ,Tc[6|Q˫Ղd4m 5P*.X$WtQEW4=I+yxրwcAEz狲ſf^uGZȤke5GSgux,[׽Szbn'D>$RR\K,]RZ)s+)7dA.#OT8h,NL}?$Mgﯢ\׍l1~>;oيlMU3qF$`R4a0,[Rs,H1i=9d=K`D4lΠ8GE$D@T92ɃĹ_/@',w}UGo){@ѱuh&yQ&}Z(,1 W+v̲ZE>{vH[Z;kȡ+8D,XLk-  FFב"BJ,~ˆ!^Db ːqRr}&(@g8ڛ8[:M>a->l9 fvTT{P#0L)hOOfڄdaQ&k FE1@f8 ܐdjEg+j#@JBji&hl컾_ܯx6i/ 0oN/]>4^;I-*..ʨ~Zr<2%Ŵy}SiNΚmp#yR ˩QKnrVQxR(6 N)[u\("c{BN)*ڔ%/ b,` dsVFzε"_Iľ-colQˣ7қ-mfj e϶Pl•DoyAšۋM8Ӻӫi\욬g%>6Β%`ԅHTLVJL^20Y$FŔVfEP=!)ٔQ oǬB9T䈙}|<Y_oxnb{bjc@*͵= Nbp6{#䨅ȸC!JHȹ S(x6k>"!WEG$HȢHdR։Er|{½s=l%]C1b{ZDճETxkJ@9.Z >αsD~[!X2 @BM$LD.v&BD#yBU!]s7q9GzՁ^(>:{%Eӳ]xk2B QtVHDQ[ĬLL^>.>]{BfuLY#~sdFf=ޟw~|G)g?ߏjkP.1J:/H4 'XcT+SmZR358w"ӈ=R"NRVF.828@8@0 +2eTI:V1d88OIɺwt!A(2Y~sqpdj7:c5B(T$%:yA( ~ _EE}*]o2tG73sosO=Uݯܞ>Kn|g]J=$i`%מ&[M~1䧦JNNsPM6j=q/jȕ fayCʔfl(Mx֖`/K~~O}y+kz`zxu8>;x-|e,9Deo!К(Q*`vF~ {0I}uˑ@X*B˕W_Gq!GOFԁ½Seۛ`+>gTmy`5`\"/z,DOk %s,G1b1FnYlr&@ Iq0 !**h2!+PI<:"H$ke`%e:+,ĔwD~MJT:%8ai1ע5 >+|KN*2G?,ѱa<Ї&:uK浽\x^cNNI 
d0ʆQ.jrfgժkmNy9 19QoBVwW%F6!+ g^fkhˏ7\<c+Ty1⣳W4V;K_WK-)fDg3\6h'2eWˁ^96t e'Z+Dv$ǥ5M'jb6Iq_32Dc|[}?Q`r/38p ?_ےK`,U_hƓr`Z#Tn誡϶Q{sR?<__[~_oߜraO_7uO^fi:E!8& GMдijoٴj9׋ߦ]g;۝v!^ƄXl@LWv4/@xjߕmفF6$?jDW#I0)kH,uyIR6H Ҟ ӭa0~:ذ4.u=cxmϋِ-GF>c4$]8\K,JIhayu<3r_6'v:|#Oy;UyuggSyVʝP gMd)mM@ܝ6p힋t7ZPՇ켚Ul1N_Nշv[s$lg2J{Я_nGzCj9Lds89G2 y %l)IN|}-%C/syx>LX- jpjF rTUMXA݅շ柣wQ) +lvk5<K?.,܃p|[ͯ`}ojBv.LH @&_m8N`e(jkrR N1&m8ÄH1?E"}ܾ$}(KGw%Z`})~3k6EcQ\fk95\OKL)[,+0ى@ف HeAN,>Ҳ!]H f> 9eX![FhneMg성ꗶNL)z3oQ&my|9EPTOtr<UjLqJ 8}Tx~y6 OS0,5dkE@$U. Yͷ]%烤Xߏ.LOm{~ S)TOr( [zلn~i [J0pg-exIu`ڒH"U*G#`)E#,@Bf'r" ΋ $,Kg^pAVaj~7`okt1nJBާ׵{*SSwe|t嵮(tLlԗ5]Β7Ɑ}|ٻ6eWzJ.^sMW>nQÙw%ׯZsjk9yVcm~ i}~ $;$ҘW@.lWUmWJ ŕĔ3C*%hgU"]WZUReW i1׮·UZB-bdƓ,fE^8=mW [`Z?@uゕ, QE41,0]b?[c+"8ە%뾬t.(*ٞ&qS}/?mo!=i^<[ s=D 0'i <+'DqC ȱT"$\sb{yCI< p4Z>R|[i6jbZ`΢^a_4Su`d3e_CH%50@ ' fa=DT ɩ1(Wj թ: w}c0_ '(h󢘜G#FT_{WC VrLN?|go1uB)1 sj˩mn+E .coVo{ _7XY=z)5f׌Q^3k(y"f׼3kFy(5f׌Q^3kFy(5f׌Q^3kFyg:5fל s1@&dgbtgRo{~r! =1 kYΠ:RgP JA3(u6 7`2)Π:RgP JA3(uΠ:RgP JA3(uΠ:{q- rʤqCSl$bDPBy YP &CQ|*z-98i"%Racq FC4p˃$v29Q}{0Ts%]]Ѐ0Bz#BbNYZx iSV)4FƐ&|RiBH"KJ9QuS2HFuҌ%lX\t'3E4^JJ80'%|gAta@$# ȹ_0 $I`4juء{d_f9E y/5\>2e:X0R Z+,w "f`w4:u4{eq*hRh9I<xzY*z$jcVPZ(`!cSG(4+%O''Ɲur-!?73KzuJFNHk*`alF> 1p;ʴaYKxvU !)C kˮo' _Ȏ.xb.FqD ^-)1 'Q"_1KD9 k.u0K+do\dЃ%:&! &ʬ)z/?] Dݠ~q!)\E' ?!\k^2 Jזl4C<8c0㴰</Knǫk` v|j^|P$J6"E$phC;*^qJfd:fddp|rpz/۾9Ps՛&fs!zjy2@#&sͣT$4.>$'#͖^[4^A e,:='|&j9uSt%:׽OzMՂff ?Ҕ㾟>4Ռ'LbqczK1Hg>Wq|7qSEF!Z$ >S H'>/0S;8<=RTݻՆTL_ z4NoôԍX%c?V݆{B1'X}(@k$J^LMu|/W"GśUUN*n8eǛy,|%xa?$La鬌GK8܌2XCtG+}O"c*,еEWEb?&^K t86U%37hm-̝dxZIIy{}Ա$$)bNIzza\I={2\X726]\T2~mڽ ̫IW^-{=)_eMꪰh][h85߈XAyUؙS4m y?4˞H> jh1=:猅iK؇*l#vRӽ/^6Uv? b<+MbsUCB{W!_^ ײ,)397 x27{j\mq iFemnQdV*BIYgƯƼMINeJsq2Km^;zƉ. 
ȵ[ܽkKh;#[" 4xX(r,puRFf~7粓- @e_^jGQD}ga-IlAYN'c੡"F't\S=C95s^'B/@Oʭ3lm< jFpX/zYå'J L8A ~Vŷ6){i'~J- ęk@;A  7y {cNJŵqy>(ˍ%?>ma hby=jMWMX:ZYK0X`"k"QK`QE57&9NxQu|$+h fY%_)W0OoSh L&k\*nq]3qҁec`\bҥVpba2l|KwJ ZK!zƌƁZ1b@3WL#,8ZIdH[[3Av\~9= Э4g]OA KL+ԾEn>zϤ)Eޘ7+zBɕeNF]ɴ AT`U(U2xi2$7z-oڣЏyZY~ #dWNi#RP RadtSӜePi, <39fe' 3o>%U`:`}j7xߝ2utz⽛c֎ԎԊڑGs6][k;P`ak V4(GRbJ)hrU(-Lӹ*\Ea!FD$t+Z ˁjL/A)0 yyOL? !""XN1##$(^Ke3(&  6b4oS Kb<]"W- 7phW[leߨ7I ?TrfpEJ>{ׁØ&hUrmrRI#K.&BQVY+ ;Ԟ <݇-Ո TG]IC&Q`yV:"JqTQ& dZ)"ȴD hZG쌜uD Mbro.Sk^~w;[׈',Zռ|dTL9~²ǵ&du_$lqX\ƞp|߼zBoY}Trn)EE@(mp&o_!g~4^[~MP]…[@`*nЦUoy:xٰh<J6g+ۇ58py֫b-gM/ƃI-|{ׯ @,W0Eڞ h{,{^u_j,>/f Oh}~3 Jh5ٲ`^ZH+Blyiwr,.7iK"Ͷ$qeݖDْ?g(*Za޵5q#˞#PNvUɞuR WD*"1PEZ,qƠCw/I[fL%d"'@bvdgE&֣ rm%@DAȆ'!]@BCUFd5:6o;#gy;-q\_"Rҏ<zcH ̂T^Xi;c0E,ZE"'c+$M|Cf4YclP(zm $BfQg۪x_|pzmUMm]2^`D0V(sJ,dIBIbqrvHJa$b0"L2E2vcVۖ|B17BP,ӶΠ3^b#z7jw/?O„o~;?GK{Dc\ k>Ȝ1I^4@)JiCR;Q\P/PG@i hݩ*芗.1vD !$EٙAN1u^+o~YңUY< p|~1ӄs凝ci?ICޖ.>+T`ÍgJ^_}&}ՁV}ߍiIӲſQuFIJFbҍiuXx*5VnQӪrX%yn2xV))f66kFTE͛Z! HqvێsO颓Ƴh;Mɮ6; ݬSƧ{j۽{.׫fX>V^!JLbSS`DPЖ4=Rb"ѨCyY)Lv9>lw%5XlL")6FSw9%^=p\k̯|hrU%6EkK>`W-,jCا8ib&2OELe[2&riQ1L8jPn_"nǹN%DB#k-'bۉ2 Fptd)%BeÉ5ޜ. ,SP!E2"RrT `(Hӱ:9kB6^Gݲ?n({HR!GL-h϶OڄdQ)Zxb0W0,G"V+ƨ!#Ĭ Ih4# 95\L0o.ZڽQ_On4ZyEۃjI˨nZaTh|1OۂaWGNxRu G3^R] W{sG IeD6I@90rQd$)2K%ŠT iMc T\l}'[b*0b[IƮ3rHLW ;M;B'9Eun㫪/)=غV>?gg?_?+#€HzkWcQFy< Q qݞbŢ5Tc')0&FvHDm;a]eL;2"g5b -ۂڝqǶ6Q{>F,Ҡ=rų>†Z\f)e) SXQBlц| Vb 5CPlEGLLYD#3kXqd|HE'٨>xv ;MQuj{D|R .ZY@9(D $JlݦkX`j`tDJ3!hU6a1%-'PQuYif*\K6E1.=.q(=F ,:jK6J&|֥̪Af IXL baxŧmθcS#Wzw| 0I }3E?Jcܡ`kwKvGqxT*7x < ?N5N!TlŸ墋mB26(ЩM(qOf-n{7Dq}ܶvJY ! 
q6tB dȳi+QA'$ub{X[%8'h6OoyѼ omV~gWy}\`2#`  PǔՙBmEDrIɋ$RRuw+@))5V8&J}J͂h/YFPJDDYO9xuM+o[ͅJǂN R9^iEz |ӢӌݴYZ?zvL62];W^4`Qr8u4<Nzoub3ݳHH ]Ǔ[YKy4 {ËJ9 K)1Mc_}l{~_,YauS">_gYJiu1 \A5,7l&;c;߅:6٘Ԩt!V1/w<]iҷ/ZݮmSK6P H #TD.*"k\S>2$suE-Ds)EBE[ym@٤,#&$b f>M&{+c,솕~lxK7?P[izyu=C툗Ou)uP%r7#*%I8K(8g^Ll E 8RE)8R$U`><(88!hј fzm5"{\V2ϵɏ&l\5kvNn LǕ`??Ͻ ߂H{GlgMעpaQĵ!gϿ;5s>S̿_>qZk__Q/Xލ9MMnig7Wo{!B='Ytn]Ho0tUA|por5oHDKGRΗ #c'fYޱЦ<oźgg/zt=f?~`rONGMr٨ʐ(dTbLF/}5ʽaU8d 'i>t#Z@>}匿Oo(TϳmdڼuL^mu69A4EMnnϰKWP߽}y[:1zQ}|[_ m7iiic>) q=4OЖqzT1oMXv_/*?%HfS 61$kSʚ'jdi!tt an(9*RAL`Htd 9lcQ\f)QC'fJRQcd3|JFgQ"Gt\Ef1:I}F/|xPM I tmvVHVD;ÅV0Z@0C.Kf>ӡO+5_#c06mռt<{2i m:e]b\Uev1lNeN\,%2S)a*ecYJ̅#D L`}f2<&Ξ|M}wn6 :MV * FIF,KTRNd%E5q(^g̷z}) {$]'"]R ukQkJGΠu\&$}8U >jS<)w(*X}b>*RğgVng+!Չ Y/ΨD*Os.E}s17_/a2wSNii;n變T2TYk^~{fŧ &pgzE$H6z, O%NtWc~1<:^3<`̓7 =Hm|(AijSྐྵW{}Z%UHKDx_yJ'vҶD!Km닂>iZ7JWrUu_u}uՓ42`.,BG5;?]ÑtnW?:k_w,R}b]գb'wјv3}kZ&]gB{XOS׭!jx2FO͡ɭw|u1ݾ7˶211fòA&aID΍@,c2$G̝ RqXzB*3:(G2khS^s `\Jɨb =ڍF++0fPM=N9{}O>+۪3 (B'*Z [ ]ݬ"އ_ͣ̊Q>Ɖ4#O<178R@Gx&e-8-VE8Zgop=^ù6a׹7z D[qz`j'Jzj8#գ2ic߱8FI hGLXSr:hGGF#J\UӐ1ReYhIgFU5&.J\{EcAvw>?~4KO5&RJ笄"'ytzRZE]E%5IVQBMKzbyDm vG(ـoG~s"~cbM^&'n:2qWUQNtEjnOteRZ7sMtWoQ1qԘjOK iAmr"6p' .K?)01 d Rqv&yfA-RVjͅ|,keMV@Dae2צMiB7,J'm9rPlʖD'/>oV𓹄n Au} *81cCgmUggB32l"'tK2fLTq2ˀ!K+Y.zY`V2hu-g*"g` ZSB($ =x@RdbY9:Ue]&Ξ!Ȩ)zXD*yc$[iv",З,r,EcB`.`ڄ!uzłf. 
D-sn aI7da"U`ى}֑[NTaIy^ Y1 1u 0499dZH'ԍKI.-94 M:,QcJ4Pֵ#.>U܀[A;4@NHG^⋱U3}Y*8(ϰAaKO_aNg_}gGȉ{f5 6*t*Id: q,GBNF c*ǬO;8e7ޕ|#A?3( r&8BA!A}l"U]͗o~Y#N{!hh!#ieGEO b0'wƈ1nwV:҈gږH [օ~؝у֎=b:G !!if6.jQD0FTW婓UYn1!)#NuОIsW)j/\B <8DN< ;D^y@bIKtHo@ίoWJ՞/1?~zq,QUEJ=nKwv {/bR8w8U-Ow~^{2mϵzQŌQ *eTG]3԰k58Ö{EWƊX1#"xɕ'6K $8YP2#j\N[qh5-N J'iÙ"McgQ[>S?-Tlݙϗġ}fP5脮&A:#{fuNsFճdZ8tvkEK>W &CRmJmV.};f]-:}5}~./8;R k6:rdQ>yN͠IsFEo\mڣAG-D]14)s.©`P(x m| B!WEGHE-2FojH,S{O78qWˆ}Q̈jdđN%M9׆;pъL)'&|B c5F6$b ~kXP 0AdgB ѪFf#zB WQwt.cWji\CZmˋg^G^|r6 / hAggu֑0#X2J2qk!Px ^ 6:C3.Fo|̑[W/9 XD=nOhj!` z8km)5l5e,X#ljqǶ#%&_ˡ\5=7Լh!Ä0y _NnK>Mpe|y٥2 Hfm]xմi(kRu=G?C??\<ǎ4y ӕ;y*޿ZŇkr"`6ƱVGD s8u(2Q ^rrZ̯r6]Z7j2}?2he\*)ewr>uB)s3xafW^[KiYYJQަA  GnȪ}y~ݺ:$>]{jnAkKHfDLvdiTh,y)s҇peL؝}pDe*sUW*2Wb&h*xgn>Ϊc\_&F[ZXַ}S*ErS8&+)E|2B瘚>&ЎĎtLm/,S sY;p6^yV4 `LigƼ'z{r}UoبytO(?Y㚣QǔlU^s,Yb vR׈Q1%e2J*+Ga4$)4x6 PDD)j}c v#^eUfߘ:ZvܺhK6oThMHv>KowO(`ykIe;d);w qg wxAtEACW׈R:3tB|Jk-e"ܕBWVlEtu>tVp% +, ++&J90SB c9ҕʺ֮3+$R%ؑΐPW Y"up%RЊ]ʖG0ЕuBؒ"` \S3hsC+B)G:GrlQKXbb;<ekbJn]/sVssj=q npͳ|MW;$t#ʁet;]-jꡭjHl{!I vjUpsVJEUnvVx0(6u\Kx E,U΀CFo|!,Hc`ˊW1wD`g1$yzgGJ+DNteG-m+ڔDWتrW ]Z;xu(WpΑzV]!`^BRО(ʎ(  Js +lt1tEp*wK+B)ǵs+Ý%m#` 5"NWRq>ҕ-"uEh࣏#]!]9LAAt9+gbv  Yҕ}+w8gA#\_hpMЯMf.N޼cw?ή~r{9rT./[&v^kE_8^ɠlo_=CNuğs{RzjSCO^v%Ӣ-j0koE&Lq^Of%(݆8Vu=-q;ۮfq4-#rrW&WR9d++/'ktte6jqyר͎=ZǽSPgQ>?ޟݢR=Ծ5N?XP[.g A pWB0h}<x#xV'1-c5etEO8/Y8YF䮖C}26XIl ︤/0N<_N-cΨ\N$Z%N$JV8wc]O!;`ܣ;`?ή~ÜJҲ ޢh)z- ۭAWxuGT/gquJsz.rre#M^X˿`vy;<}><Z<} !9 b^ 9臛OWGecϧc]ZuWO+߭[%i"MiIT{ 6ɈdkrS$XsM7.ы3K!\%d)jj9t5F(1n d~Kpv%*!loCѧk?n~vj 0E^T]U8ͪ0?G .1q7z@=hpӾ!{^)Hreml*4S_9ÌִVTճЕm@(,ZU+ѱVvթj7Z{" a+]z"uAt-7Ճ+BiG:K2VJ>C|Pq^#%fVeJpEy7۩*ubOv'=1(JR{^rgXACP1W Z1f|P 3j3R;xAtE8\CWx)tEh(bGzRFh +e1tEp(NW鑮ΐhMAtUAtEpm1 Έ:]J%G:CҖ9[]iK+[u+DGg,XE]`(g;[ ]!Zle|:^N.SC+Dɹ 9pϚP"+4"Nr8]Jڿ^^0g>Q .?qО*n(t%Gڷ9-v7U DεrcwjvxuLvkQc[cZirdXaN.ݱKwv̱;^cr"+)ugRw&g xg+u{gs5zggIkW ]\kJ+D e|J1#,z}] ]Z;xu( (h$B e1tEp(RRJs.=|BvEp+- $cdYҕᆙ0W"lDWV3t"CYT BRvEpBW ^]!J%HWgHWNH%FCWtEh %ؑΑiK+l d@b ;xBCK@5qA0b| 1nbQJL׼|NDcӚ ~-/y7o,lsijqߛhd5"z/M:A 
vI͇?tz܇i܈xa-c>߾wmqIW~))/]` yZvc-ߓMI*VXfX%6OfF8Qj=99ahǷXCWyz ]}*3n$~'_MJd=~o~J㛯v;r4Bzǯ3U[:w/h@Ey"Au~Ĵ ||1+y i8}sC8ܥ7 H6my?D۽I[2z(ZBo[ds$oUn:[w݇}~oķ;W ~GᮢVߤ~&  Nk.%ozEZ虁X嘜5}v:)gIq $]] Λ[5s49eҡYUcf)ԩXRUmSՔjMxcYUckdkg;'qVUE@ P0Qʌ`a|{L*kQcN 1h)L $Zn-5Atlbl5Qès6E=ggbjCl߽pI5K;WjgSZ+jkPR1E4T9вT=b.;$35#bz4*KNIykOpGDC2K/}3C֕6fU3A(eXDe9w,F!3; !FV9j{ǘ+ oTh}GJGQ"'@chE鑤.]mg_YhSH{A1#3X2|șbo|\S>ޟ7g!UEj75:;s+V[uރz9K`w:rÒ|XvoQkNҩYJEWRHqu~0 m‚ڈnIVCIaQU-񸮌2lk6"kseUY'].[ * 搌7)Mɂ13g"S ]0άVikCuYwiJz[(.V΁ |$0`3jSNc{5PQAm>` Z y9xCyGph(QJ[uPyU糯c*e$XLytp F_X$A.Cqaj YZW&+3\5˶ " ,XEi٘p#*T)Jp\;wMBAQV1*RO98.H1^l<5]6Ajc4)*@F&0 β$ AQ)tJ_%Cez|=rg3 du*zbO`C@+6CA dܠѦ!P C$X !a@YP.34vҾSDUtFtѷl : 9 & Bl  ` 5R(pPgU ]p(lL6v_ J 7PЦb:S %t4GQF̒qW`D)Z6NBRFCƺCw3J1 ETc܋.ǭHR}"r 9lABb~)Sl5J.!pƮ $Y*k hPM41 =IB}faP.+ŸKUUg:E' 7:*2Ik4B.}عdⅩ}4NkN~[_ r!]#5bx:p< mJ_`"8+$_{HV*#d`1@M&jZ%T^SehRȆZCJcqy L91/t6@2&@vTm /} ,:^TBN5~T}LUS  d-ЗBةy&B%N~ o.N㐋=`dd8 ]{ #AKzu9Ǡ-C^, @oBD c @ azu5c0M"<ыV !nKv%)5cEXA5DCWHP\ᚡvm!E5K(+{vɡ `9ZA" td!9v`Qm BgQ*j(;ɀ!JP GGDޙU ʰ*)᠛FeR];%VKAq@ `8Cb:4I1FFЙ4 jݸVU.m i ovAֱ@Yt_lzT ƅ!W[uK>σegtj?tL;ݞ#љ$7X-lܼ1ugllfѓEFظ:4%B*١b1j0q5M/ y(Y=14f&fc-,a"[/*3bO&V9nHJ4𐗨-a]M1WTsC<݈VܡfhRʕT*!:!%, 3P_-[F3 +EkMkׄ>Hg9B~IQ^ bp cF[TcD9Q:{$zuK[`rOuHMuc@sZv@[dnV4HkѪUVg(iIc@LЁX2BCz~fը˞ vPӸs ;[*އY;tڠ@\@l6itv6vt7^]x0v?|u\,8FqclkHoׯW Ih~=c[{vH.Nj8T:fQ WC|b ~ӚbA.6^]lZgJ/|N`74'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qqMNAj=N y&P;䉋' EqH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8{*=뿹aapNE~'P~^N z h' 'G9 @_aj $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8^Țh@vz@˴'F@%ъ%:щH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 r@ZoPZ~8[MǨW Sw7e|#tk2.]q pɮƸq (}K0.}1"G-EW_ ] ?qmd/t)-N# -U(O0bd[ |/G{`܉'nE6[XVv7i=)h:!{b 5Kk4| ㎊nDڇa]:y!oG'r+e'pMs=Gw ^7*4x^_V޴va] ebli?^c\c|U:k0x^=ޞDu]-W͞m Ŋ*Urif"TB0hp<i 36Oj4ƀkWs/1[~/GcX?o++Ѿ!nX ]VϞJy+e͊ Հa-tE65P(tʱ1J`WCWkj@k 灒^"]yvפWt9@=] bVsfMtsjuj-t5?w[VCW!XRaEt5pY ]3^] ^ ]+CW1ZfɁkk@yABWk+wϥwTT7[jo > -oCWCnrBW]z!ܺ2u8cY(P~i)L 8X{B}8m?\)|:n]_Oر OATS Ĺ^6H"OCN:*~ťǜYd觘j#DR񓘗n9kmEChZ~1IFzˤzˢG 
-%[,&8=dxuԹ=gwewb/)$d2ԝAf&#o( ^Tڑt;KOf\ rD%;ΈU:w\BW @mRꞮ!]18+ Cg*+t*vt ]+, ]%3*UT(b=]C0J!U+HW*Ut%@\!;+"]RW + ]%Dʕ}=]R`wDLpqgbW HeOW4w)<V3tp ]%J{z3t%vzqt`}E|{DC{ZCZFWb=]`FD*3tJJhj;]JPOWu⋲]޺xPƫa\3BƔ"y֌nB&C#>yg0vFc$]c$m 8U4%Ct5QUB+Q*TwHWR+, ]%>Oj/>P;+N]XvňtZNW %=]CT2%g03t Jh5i;]J޶=] ]I*4GW 0N=+tjv {+ԗC_.ΗUBP{+͸T 5;L(E ;vG$`0J(jn4nii UFWRPmiL{)xB;!m/۔+|om uf&r;?b}'+"6S̓LA 01KH5?Դk,/&qې4vy}r;XZK 8=tsӟay/;s_ZdA?,E1o8x{M#bL|]-]GA}&Oըwڗ~*+ C`yǿ,qQz-BI4)%B;lP:.<;[Wld6};[> FV!jYTq>& ]f7NI$?|NBٸX fQ|#}zi8?Uoa63ld6(>ߘg<'_^K's]˜q nL,U6i(o(:l< ``@&$#(RѹnD*ՊAXGt+qUd8p#;!4T{Ncg*$f)^FcD2Y+냉teDDL4hFQ4`pHnX5gGfoʔZyGL]l eP([)zHs)Qx*)XF%T0p1:% %Ia`,-.Qc66hO@ﰿUKǫNuN׉.Ӎ;]/.{_{_qit?ayփQ僧7!^ އ{UL,4.rJ|Ϲ-G_~Q"IjwrgՄ2 C8}JB@Hm#1 \@SJx->cUfuNI L&Qϗa%qc_ q/d0rirg%<} RA=ъtwkY]c~(!8"i2ܢ5eP z9+&'btz;˨0vɶ %jKZQfQRRE4D -ug_KTW%\}_B)u.# ܼMSm9\,o#8PPnƢ\dNn{O/ f6I<Q$p1~2ӎ" fansqu9+(g!$uFx\NXEPP"ǔ2y$F"%9gIhIgE]{{Ë1@x3[{>j4mX|Ƣ/r?^$ lOyI#,-g`ld 6([L#",(l2zb81S*1\9΃2q/ ؽD*0ɠ`(ʤsLld ƈxAXG%f%{iy\7nQ:BRshzs3.Oօn\V#GܚIA!#șR\|-P"; dq0c T  jIӺh'|ŝqo*}炟l??MeY 4 ^JVquIN r,ɳĔB@P*~ϥ-:M9sT0WW9$mvv}.Ւ\ڮT'e0OZ[ X _0M6<Y元 Ra}AIO2 E^WwNK#W'ߟn;jx\jzQit0I5FR 5:9Kſc5 mt  #B}#:?zV~ oy9,,"*2b#nկjWyEyV?^do ,&jqjÕl<^jlKZ[W+Zh6^ZY\Ws0׃l>i!{wJ_@yGvMoh#$*:mblPX탲e=κ]%Iu6IMTGr@E,ǸprgɄ1pt!xpgثQYv RHD$Jg9!LHUFd$xFuooT/˦.VWjP:* "45uq [Z^C;z}7Ks,=􃲨{+F-1'fdT9se1h\B}R: dSg&DbAGB$Ь)*xm FPhd݆XR=Jo<Ћ1sp |Q,%2PNd!b֚Ea2& {ؤƯPGv!EbZ 55y0`ߩt$VٕެvʍYjwWc!etNX$b7NXI.AM &gYI=1]E~_6}K#s m;A݋gڽt\N4Eت>@{V: j',Y-B֯)z%>}P$N q}j/ڸkB>VvR4b6E8JxgܺnVN)4"Y !)ƣ;!Θ0I^DΙp{.{Ɏh#"LAH#)DfW!)ۅ~Ԏ\rf!vQ|m]eựݟ@~؛f*1{icQ$'sjޔ~` ک}"M{9qޟ|ۻnnỸn O?Q+&0Y6c{r&ճFDz> :΍s{ʓ᧣@`QIA$+1|Gy"~^>Fu^cy寯ڄ\*Jw)[boY>eɭT\jZƟq8>r.!KdLaV1pϞρ|2(nѩ$KwZ ,9vゑ;u$곈KߒW@,,r2Cp\)"br +Jt0F ;E.a s->l9/7dTZ{P#0L)FO%fڄdaQ&k FE1@fM8 djEg+wZ7"*ZZ $0M93*LKK7:+ﮫ{$Fqe l'w;ס5.G.QRDLi0"\|{|8F^_R~Qc)B)I'p+$o\d HppU2 9L@jOJ,yic9^) N6gE*n\+D[2F{*VƦP, ^>,\RT6dܑ\ٸ\M|d|||Xb gI^a(h SRX+g`L^2I0Y$FŔVfEP=!)ؔQ mǬB9T䈙}>苜%v 5sWvocWK=3N I fOx wHkM;cZn|H|Z}A&2s&OU"Z& tX]2yj2T.K@X*B˕W_G"dFԅ >cWi|y {~|krJ ',8.uJ3կxSa|܏S 
fxx=M,o%pe~|ȥ+F8˻|aZ?M?MJLӬn].2&G'x8pKyx=5T1I6T}kEج\|QiKSO5+OnqǾr~L-ގ;!Tn͚FuӅT弡b+KwL_>MCwհZ<%O*8;YjYjZ,5Oqp5֣E|vӍ:=N)l2]A)ϟ)N DSMvS3W )HO5n:NxwZ!$(I k}eݾ\Em\˼k(3忄JPzNx LP9NNL14\.~"(vx 1IΪ?=  QNb 7䒗xg8 :p6A2!Ft"P%coT=,fEpN YkpzA[!Pn.,zM[lqj[q2Zn/ p(7Tr17$y߫ab(q閿,NF.xF( ~*B?Rgc#۟D2&)ݙ%Nס{ǭ]ŵ `y֝`y֝7u%9H&F)Yl ISs}e2s/ONɟ뤧\ANȂ9YE\z_ P1=2WNwЬ|QD_ y~OIlf IO2^ B a'^Ɯ #@TԄ|9 p*D1D%8-nC<#5,uWCJXe3kְpYX(XGZ,W"D@ ⤲yq TKT0kv:?A`Q:h- hu*H$nbfIBċj|V)f{n%? Ǭp,#8 ɸBHR^FJR2_}'Rď}egP_/>/Y%-f*4|GyTEUg~l/jF^%F'ȯ c)$CDKE!$NEcHyB&t*(Ao>Y" 8.eSA# ٌ#Gbps}+_bI,`Z"*h\sVLy7*HP$)ԇ(> dM8_rbrFh"i Qݸ |kQTHe8rmѡ ]LbcY8wfy6ϳ|4 !Rk+!.1I NS/Bܠ~J4)Q$Fap*6X* ! YF(ϝ(hA)rm5"FI 9wBt:F}^QE&ǭJ =)ŵ z^)nR1ҡw$ZLl& -Jt9hÙ"ٍ\=o :AYqkI%j:NL;0|iZjր,0/Y idT>*nuF 1̠qG ;L+zl\S@mP pp P89VHTq\81B)*NNw ybZCSBn!yLB9]ϒ~WX-x\d"@ːkIw!x JC4 ł'r@, ÉFf$]!$Zi=vyLN4#qm,*)xS=K̐Ok_UWk4Z/لG;#E'x N Tz4N&<CuNYHRG[kb>sr͚.QLSYpg Akzߊ-Doa9@y"CP5@Ի*B љb}=dXhtڪs=MVs3'Lm;( ߁q"@  )z" %CI2jPi eJQ'CGnC umz'/{|RC(~@۹l\I`514[M4RD Z Ifgc83ДvNٷ6gE7 Wz?8/? VTL\4(O9qtQ[Oד[L/ϲBicb̈!d}^߶Y6Rc[/,0Xݞ]v%K.2Lq~w.aŕ]~#7;cl|;~MxO(LH'H{7K,{ibz{wyI$;.Gw07?a4{~Bo `/ yQ+<ޢ(|+ oa3xpQ$3t_S5 =GQ: sQ yc%'taw-.ү}7ТHJeG* Fxп,4ߵ j;9ib4]\E|RRɎ65-YſZ]N-dgXxD)IDvT;JRÆnvcI;}h)MּAAF굅2x ' ȫx_&篣_bu]Α%ލP?M{YXӖWLSS%ፔM|ekrʖU\rմ\m$,>-0,(ӎ$u.$FDxY"İD]'Ԗ EZBHBc fU6+Yq4+Bv!M ,h34X :Ha61VƉe-" ҃5:wU+uꮓ,ԒkA;7:@Oo\ZकuUC-`0$l  |"cn*BA(@dHSBqr'y4PP[qn4L((u,K\ ;u,;BJύ)Qv˂?=TV77) }v$WH&»ՒI azwaZ>JIM5ч+(Oe~Kbс 5Q=ͮxU]BZˇ?ۻKXe< [%'D}SWc?*:Z,szrnp6;`:a%ORy,{9@Hco٤MoJ7TM6c4='?4__^7oi 6L]0h!atL 7e$j"!Fc$gDw~SV}%*KU쫦d7SVe+H E"k- 720`] BFQ4gf(rN)euE\*͔V f[+UF/2G|h%s@ 0Am ` qu)J;@48 60n_7$%y0) 3OPqĚ}wS@hM 4x!$[r_kv{vԀs}7i{#H'$x G R\-UV`_K~ u,]l~MPjl7UprPDH?IL!Ha\;faJ`Dϩ=P3BJ;֥-;*Dn)`/Rd@[ /Z{b@BN9n|ڒuB]2hs89F+ ܵꣀS`X|LһNo*d.(B(.4ۖ9IQSFAG¶!;%'E\fhg]2Qv$j21A2&Zchc~biu5 v^ՆVT>%sRx -C6B7>,couog(/\@ٙ+~Ez@n S$g/:0'!x.Ghu?xu;#xƟqw/ξ…%enفsFO8wZ˶ζcyY^~lܽJGy!GP#zx$ qYky>w 78ӏ1]\>tD2Ɨ8<$`  \RfzδV7{߆ W!p O";La.oN;SIj }e*$OՕAjOfz?ǟ hL'p?=[u2m֊?`4rO@W^*QTeDuoQd5o@i!猣{!RSʃwƨ) (>!2g)z3N: # n*&Cd.DRJRFcVLB#(Y 
yęx8//wg%>l'\=bpj'b\ q7ؠ]ađ>gr1-@NZ},eMd 8+V  "1^u#=߯PRpQBR$)GxT$P8JsQ<9/^ 1Joۏ[9 $ĸB9ZNچXg2ֻf9gA &rp="]g 6êO'#薢iϚ1bCt~cC@50x17 xUIqYgUR\i8G{gwmYO_Cjg kTkplt pvp0˱b!.90wk~v<`Sy{i<'y7ʞQ?~~h?N|>sX^F=>ptN=| VK<{xj}m#YKiW!RR[MRq/jnI߷R2$R02Iחn⟋7_K!>#6r~vpQ7;R:`rv7?f  ;aj5/t!fq f/aǙr'ϮNgD\۪ڐ~͘em&kjQQD F<O16 A6]yFab{N6 !ɚi _0x+LfP&s@[V}gp^t{ۧ3SU;;IЖdU?Ww~`4*CK(nqVLWNèqV jN1*FX괏;\|ocؕL칛]AϬT!Fgnuc]LDOGOe~ky0ת2&Ց]mxqM].%|w/0q)`655k3q$\1@Ӆvhp>Cꢝ6i~W /jF#[o( Rޮ*[Ѧq 8hL%* # "#O"WEzoi+ik剡ECsSC5V 7De]&~fH ыYM0 ?y68UyDjR_9؉hz Rb9upNÃeSzj~ټ%Ԕ3u󟇞f}H3dôU5r_=C!eH(n[ RD`E0{ܣj7qG/DM ƃ5e‘lxi)ruoyQA&Mm*<1Xy^9:Vt/`[ڤ%4CyW F|,esB?rHՄ e3"-GE{d߫#uԴ?\ /@a'xH 0\ս0z^ A&ffGR#?ݜvӓ7fNy/'i4,g;:Oa0g{hWw/ ?qJOq1eRcl5))=ciLiڜ~|sׯ>u)oqȤB{+*F VY,LM EbQ̩gX!B*$9߼P!Husa b,U98҈`XkY YR9eDHXRsΕ<(#$HA@&A 7RIę &-CL{+,w̔YN:1{-F24%x 3.@{!H8ǴpK-H2^x bqdQmcZ%YA97P:#<4 @SJ#ng1V-eي8 I WBHIJ`X NȽS/@uIu (+UBISZ4:Oxc{xp g5 r,w]-pۏ fy!܅枵+0g|r&կ< bS,~meG&Z~Lh.?a~<~mY; h\> >rيqj|k[q^߆*mXoָ!߳8orϖ0 @D^ P )Ai!AhSFXݥesvӈu8n,_ܻ]ot.;]+4ν[=ιG;ݑs=׎;rc0EUJ @σdS|H8C2y YA0ȹ&ˍrv_EK>XXwtl+5\XşviOd^KCa<,tּDdּD՚(g̚Hݚw0BC%ƃ喘ZSw_#JzdtX%$W27DG N*D;pG-"ڨ]V"tֱNSQ+D8hwaŬ*юJQS)6h%:T5X ӣuΣ+'l$1&ja]li ذ0I2llq*|M_nM+DL|X;x~XnRw?曫?η)Rw, ۑz/:h28S,?ObL7`-!ۉ5l'y*x69RW EcxJWZM&XT-"2ڭ;hJפ⣉[2p\M(t -"2ڭi|ڭ8FQG'T1>A։}F>kJ"q[|FPAV Ex*vDZA։}F]<&biڭ8Uh1Ʒ0tiEbPha|F)ۭmJSѢ[vC΢Q<%k7v[,EX'e[w)Me{[|~AV ExJоv(lX N3hR*ٚv?SP!gRjǚ[݈[,EX'e[w)MY{[ٻڭ8FG(GabPh6 MTЭ+}hT8'oa 1}kU_zrV3f͘=h)vFMe|Z%"ܷ30|1r̭CV}Vyxc^z]lq٘5l) (ٷ-0j1^I&f"_msAjA[)c7P$Ȫ{G]Ƙcy1* =1Kcc̵JU1f)ccVI\j4Ƙk_YQcU1cUaY1</Ƭ9cy1* \0ƬccNI0Ř9"B11\$PD{cHs͒ ]|ccuH'tc1<ƘkLV01't11Z%Q19Ƙs]$cUFw1f$ccVIU-:1spncc̵Jw1fNю11\$}1SccuJ71fb̔+޿3܀11$Ot'& (_8 X8%O$W&`O+ցÉĩqBAxq0-!=ӹ x(aK(Ay iJ 7hĎjSjI€y@QbŜGެ6| MWxz=ݹ7g \ Ág+c6sno.&,ʸň\~i]=+/Uq7?LfC ieiuv׀h7 (ag>i{1[z/?[Zovb5gS-NgЫO;fvgOx`lQVLOߞ|8PLL'=w('hv^ ,4ӏ> +8zқd`5⣂9>IZ+16]j`,@ɩÔ lA׸xa^ |XJ1K4\ܚaE#6Y>\?x7}9n鞇f&ik|mf^UJCJ7l$֓Ux=2*o_ֿ)ŷ 2HV4BtJ )}v~4Op>! 
-(L>%ا f~uE0Yxކ8ذݮ<|LrIVT#fBrnc qW!ܛ4٤ n6&K|kvxrөΜejӸW.bQQyo~{WƑ a,2 >[-CNNcTx8q}MRMZl"ţYU}zU]!;a0jH ~q90]]\ (rFI~ vh:0ɛ)Ϭ \lO 4f\Tmޚp4~pvpy$#n2܊`%gp!8titjL [7 ǽ~ cs;lïީlb_mJJ(\UTEsTiNJ3!п6n WZ7.S/'OǏP#_V9@7}Q` NJ+V2*dTZe2n۽LYv"Z)1c Q hF)x:w@hvMm)KGX Ab߰=ooR9iioْ﵅`Q T8 M!i)^ p@TpmN$&s֏u _c6RӤY&KB >L8F>Z=GidIZf"XO!(bXCFj|^*-r$oLv>˱eTV0 7|^}̪J?ݮY[gLՈ1u[?v oB}f|LyZoXcbl8 C:yZƫnwiƻ->\B蓃Fm: QUvC /l[p0Btʕ"ʉ.vT-0rxn_8CBwԂN؛P5w6Ze!Ղ3KzRο*n9p?s6m|9g39qZ,kC]ә% ~,`~z.uuw(.>NK,r ӊw >7 㪪+TVTd.nP\T dp(.a"HC۠6i-:Ϥ# GçO^!D>RR`:TۦT]E89¬3a,U߰]Nޤ16)',, Gdr,#ԃQ\f8Eds%}L铅-{ΰ(dl2"\dr('l/f\e _dBYpVgoG~2?rK'Qa4=|h:!x~dၵ]󒐇5/1)]/ٺ W7P}81A{s^RX_(koC@SO˖[qҢ\?lo؟S/K'CoqP -\Hg#c;%aƮ̙BBCgF7' qD018~7u^Qțϩƚl-r3 ߃>fBp9,c9]գ 4N"#tMV?QtcDŽdqheTrۚUи:BJi7k dwȩ-*N>a 6QA +2M{aUI@U_] UԢ~m ηx=p_ 5ˇ|@>cXS2 XfMu *CXMzEQR@"cL1c2̹Dt}W *C5Wh{N(X7MjN#[,o&KB1>0{#-Fr7'wfl~7;e5.>jyQ#BW9RU64) ]:d(ԣ1 jS$7 $PBmf2eÚMem@scIF {KWs|mFFϴVQ׎דiA,-Ľ)ݞ)ULLJooBWzx'L>*]D8kP"$?S8fHlm^JjB{G3WEvӻYMjtշX#H>^mЧ_}Ą(#(JQf2Gu:J'ْ>n4eE4e!\>mW LD4ٌXE]L@φ==4 yp-M纔 s!vr1xϋXP.㛳i✽mTbojփXIz9hwjhc$ .NRW)ֺPR;pأ&t҄)tq+# X]NA;+݆V-طM[i[iU^D/߆FIBg;na8aBb)-F] VFd p\vT6.P-v|?(-wIA+RA+eax0ӌ%r ħp c -بh^I^RYvTKvCɝX).`"Q)*}l R J1HE: ce FpbI$8]RgQJyr AjDc4+_YvTKvHs*;`gT`OZiꭕZiUz+ gC`cLR&"LaW[16""4nGnsҡ* j"M(^Mkg6RZb2f?E7*U\7eF&o1ʦ*46 )(a]<I[yMc  ?TxY.  |R f?K3g;#hdqzӇgpiAAlZU#8+LDA j(H ,) =%sGa*uD?t/M1軩}# 9GÃg+ ׃Y6}g'?4,$—g'!xzIy9 ᘟ>~}r[x}Vn{nv˓>M_=-?7jg .mǟ=?}}z~|g[^x\ ȶ]qܠC˛zu??oޟ?{E6WhG^mZlm0bwŋe4{Xfa7\mA0otCgikN?yoO^/U7o_?=[FM wy9>Q5_g*kMqHe{˓_@^z7Mɯ:¯?^שK| >2/? 
ZηϏGۡB<4?Ŗ/^҃O .]uxͩ:,s&ϓϊ}8:I~p^6%p P)*>h ٝ=H&=F!@P*GGK-hYNɍ]Rv1C@S1ƀ QEkxBX5"* D[h~i_a̎ ØZiMFT)DƄQ'>_N#A(!=I"=yˤ !2d[gw2uiHr#_'W&)4*9K#IAS蹤@ kH$9i1{ K.wQ-`(D dRB*m1E,04(0ZNi3È4qLPP<9UFae:ezYټ_)ݙ=RD$A͵Ր( 1p$W~6ZItwdS%Z*P 0p,.!6u\Bm _"d|̓G,&]e7 C W@wu(־_ˌ;1I)0Z)pR#Ai$`­Z[h$سHAXfe!hSDb.@Hb#mnAAc-[=ZƑ`J`c*%8I[)1͗>Iz2Yi.JQ!E2-1W: fQq.RlHj=qX?Mf郰 U`޵7#s7Cu*L]vJpC]s^\wv <ˇ~bpߺ7f>L`h>&gS9!} ~獧[Wa.ݘhƇiRI OlfG&gv~J%wG>y~)pz%!j(pos~YV~.^|Kqr!$/|$w{Cp:^t2;p Ipif힃{QD.4RڞOo>o'B`4]5B^v{_8D#'ZM秥 w'f*.(ԟ]?qw:xKЛw0#%<FbP{rW>[0-g_t)|ؚ^:jMoӶނq9Ln9Ow/|d;}q7Cx8<_va2I_|>[}p|0ç,(_)3;AC-u.܌&a:^RZEQ0<ݗOVQ `Zޣv;]x -5+},]}>:IV( ղҽV۾z?h?wCg̈́s&^6 el.o/XW[j}?f3F_fy5@3f KP394j&W2QBcNl7)WS03caDkWp%LIf 3wq,arg.rF7 frt|\S% H׼uRx.xd8TP'Ҡ{Ւk:  {I"%!#BbzPT4ѱpʨ*(YFsb6O,Ì>qˇw338zlDqjc_ }{{VU>gPشuPLsvwM$X D";ޝ=mE9'T}Cޜ~mbhA9㤅%o5^^Bz58ƓlLrWlcVn\DX1P@Dόiwǟ QC1FZ1 Z43;pTR pU60a՜6 mfBg>XHyߑK 2E咀(@05G\!* &J!\$jDIED QT뼲\Π-jSH֐KsY#J T胶ݾ`RV(UTX1%UbAÉml;4$e)\+^ EJE%smQ*zdR<ڍd>iFLt叁$)hem*YUSą+F)U#ZLü3ۢ6e+ Kd;8Jе9O.v`r3GOZ ܘkPj [dC՘E=\ҝ34fkQi` ӵ`:N'Gͷ~nƧd9<@pGIfẦטaIXd6ԫC5\ུ=eR,;,$X_yԜ2@=, cIZ:C-mh 2悓aYr~d:("\HNZ]9cRxУ$}s`(j聋GVZeU^5O\[üq%C/ۂ6$BN!mw]qʧ!]g?!qNyq>z\Rg)Y)WGAm̶1O3G_? 
%U[ڱ(Ry#֚wH1_^{ Hn]d4˂q$uQ9FJX$Lm9v_[sl[Gc!5UE,S7Jm*ǃPd}5ؗ#sDu>P!+*rX"'!7a05eR[7Hõk6ʹ7ieT 0` *?˼r"Җ{͙eDIG1E) xGHd4Tg|Quk2ϨDlgݣ/U)EK HSU^ZVY +X"[K.یIDFĨԠ/FOQ(nĘ .60 Eyc1Fy)QjlkYV)dJHWElrPk x#1Ak_'!Aր61o!$YHS"Iq%K/WaqZ=2ӂ.']JFHĘ*t LR [jN6àaIa8<9FR+\V%ۂ6aNzX$/Tȭ5kQ>}uMWרƋ!{snۍ0I31` [Z=sacq9 @PͦW[S9k&gA#KeW]`d ,)LΡX,J&yz)"p6"f-F*){l٧ψЅW@5DWhE8R N  t)5&":TJq4i&@t"[!U=~_ތokx(/.3IP% )&=PD ]KZAB9o{NR缸htϕՀ*=4W{9k(x<HY<ښ,R|ĤTNJưâ,?ܘ =Q?;Y~Ph4я$-e!4AGuH>aTGvP6GTQXP(T,4(YL %EH3+M!j8C:кNUض<èNm̿@:l8وQ]KFBb<!`C\#O9RXϩyEتm*Cp_J Ն Z_ u@#Xǽ SVQ+Mk(5s/%!#0-b\R!.!ns0}13 w3," hҘг!2E1#-X;0X )9gif0)!<Q@IuGYK^V԰r5PAs<@!z3+`I_`^_›K؃Φ?J_amʐLdt\RDs{{ۍc^\Ch>pg &' ձgwGl#̆]PUks~%oӄ Pll6q>]_t !wf .]#] T-[$_SY'*CJNēD&ֶH |y4g30&{C6^K!N2ҝ Rr v8eY8N7%m(Sa`S16"AԱ:1B2g(u0(u=ĸX }JjZ!YWzBIƌ(hdkIԜ2Xzx YjN1=<ۢ61 j>Z*N~"Lu>+XKRDЍ0K$-@ [@J٦Ғׁ]-4k֨Œopofs@?FʽEk ifIA-pյ^T8 l 3λ^vbA?WO~/?ӢR? آP":O9+B3P=?\Pa4Y͕/Nzw~c~x ~ 5m;%2~mtwq-]w[!6{Fb,Y ~ q)9-bҢq*֚w3^}xIvَ|zM \Wfst7,"Ѭ`7/B={`%7 _Z_|bg,S׳>7d Fg삑ŧn0:*iBQ1rsSqo$ɘwY@C$!e-ѝi=Fx&}&ӻUkmLGī&Ӡ 3-M@@tth &D h7N8 `7"2 +?+d2.Y^.r/^ e.Q˄Zԩ8ZxNm*zmbi_^ dI-&nş$$@ Nx*IQv~O8%"G]D**n7.XI__ήYMc$2*17(WѮN03t<.sDS PNAT'*"Dt1TY߸ BOc^0Uk1OFWp6z U‹wE/6io2؛|%_i T3(J:0~dY3MEWߪ_ VX .Z.7z>E{ 2GSh7NoItsףj">B *xRc 4kU$!7ƣwhV- Z¯uFudjzpѾq[ ҍ;YJ﹁-?:&zyXpjX͟OtPif|O)0m`i_U9bU<js:2O7%U4!@_}%h&Y^IZKDŽ]l^RZ%!g#i㳿mJSY 5*)Ua/d&eBʢϲ?׿ޥRˬ7dT^ne6S{@:_ӌpʷ2LJz(*':t_GwGl^)*__Y=ϗ{O'ƥx}D m 3ORy̓fk5 "J nYIm)E'(QRK ZnN[xFdžVi.;ռ8f CD[HTIbsO : H,mOw-tL~JQ!vmI /ގ4px,L7x͡Ӟ=^=0ihyr' ١cu3⎑onqA|בk*y1*2;/Ǔ?oƻ5 &@e3ҧO,˞Yn<ͽ[G?V{N|?  djVe/vzp*k}JÌ:43f!.@^:':1=+hѬ["5r\"Ad"A U H*{Æh) W@m~npѴ @WjFdH?&tY߳IzޝIzޝԟw:#JKEJq, PzHa״T}wn]np0TMB+/4~p1^i<9(-|1z#ҥQ8%RFs!5[Fy0}_n"uslXa>gh⡩d!TТ3"M;v6;w?[b!4 ͑Cfi݂-,k>q{MEJ,[ g&klk=;z{XJ~%\14-2ɂdl dHl(o7(@kes1OqZkYHT|(oˁnXn>e/N_NVTSݫx# 岻wMHK (/~3!#@ 3j4p {?DJ6 %z`!y^X.&:,w[EU㗑{BD!K'[VI"-5j5\JA^94O@&Qt4f^ejץaG)tZ׺$&b}cc2<| nڑm #I-QPukߌ;kLԲ+C̯cgi$7+77 .ɞЪǟ?fW7U&3aIy1$yhED9;ɲٿ QL<(]j:U;D/4 ݐGvuZ)VIV^G S aH"c?{רݠǘv݇;;? 
"%ޗŧ.}5Ѣ g6O%p'Xxw?Y/Qy z~"cZ'9F<'c4 ~H> )Vw}Yj{XQ+VAp3+3ėԦt IY +kn#GY82q8Ogc;'o8P.T{'/PbPD]8ݖKU/@!Q?fW!a*||oq!$qf\膬>17+3qb}{o{96QXROIwOisΰhuEAV:9aPEΏ %OIiQF1JpUwঌ/-h#7RmEPv4Rt6tG\<]$eΧ|piNh2ݵs#uc3j̏q@~ GƼAyw&aF‡AVqo0IR#c؎ v}$ E-[YvuDh{Xd ;ע'ϟ`R.]Ѯ"U##'`ܢwFTw3 LДE;sx\S,K8@w$j%(U2(=wA lmILW!E;E VKiqRZ9GdU_4O]vYTsi_ ~Gҽ|%9Gc $-u`J3U&b%NZ8c$,KP*O|)B|I[n5el:&+S!mYXy9^r3h.i}4<%4O8l b&{/A 9\/V?$LGf4G6' )fqk:J)J5C菷wJAmw?"٨{$wNU=ɝW8f ')#1I;ٓImvȹζ-%2/kz93X0Gf>3*Ck%78h%b!cBvR&XfjhpK $3ьPB栍,Vs ؎j*_Dd-=zZO~ń =g*qRY كpJHbst+x:C+[%I!JHh{ yS"T]u j' I%;*yq-gzvn1+&=NE:נyG]` *sog._mp2f`6+mşͽRSL Hq/f;os+@“C,;"cQwsl9>hFKxl|cb$] vQՎǓeKVaxJjE[1|~Þ I/5 Ua/ _7L"rIO#VnratJiaR _Q9-*(ln9PV,LCQCH7glSew oÝ@Db%Pnih^(/) l9=eX !.!I M!\yRY6F6Jl! ӣ%OO Dp .~soB+W=^~%HJ!=Y7ĽZ:%m87̓)gDOgo^R}ChU1~ Ё IT0`oŻltσY}VcbA o9}K9sg+)J n7=(X+ \{_BgνmgY7qHr VX[P[ӚK8(b6.a#rTe:SAø)P^AUXR/1>̎(T-*IjA!e=W*%ǀ%s͕!E"j:4 XF2Jl3Eq4A$A%QDq:՝J ޽2TydNgsZKaxzh}y)'?MgΏ̹0)l=uΗş[f퀘/gׯ_JY/roX 1ν ,vޏRrn)8>QGQzu9zi)A;rQB|*,%r_Yz%ߒJHpr7q=hlZ_ﮮ߮~z>j+d>4n)%[mht\OcGejVs!쐷ܝoWB"D C`vuU3 =e[h.wTxGurT{+&uS s6z yCvi(7].36fĤ06bf\/Qnf߂=X ;^[k+C2랛ޭWp?Hƚ-y\g׃Rl9=emCm߅???n ͮr-Zz֢gnUaْpMJ|dGi*[*1&mhư7^Cڐ.I2UĹP1hT bD'Mې-(۾Dj6$䅋2%}Z, Qtnןݭ]'U6OU69D2޹`j~4~w|~HAb2ƅ;r<#"C%*5W3yEϷNTs"w2ג," gfrTwߛ'AU6Lfwxuq|U J_m==z:BIБe]hgz"K*QҸ_G5(F6#WՆ%vhGT5IMΫ4 4`0Q8/] Jk{"Kl$]֏+o^1cZ]M&qZuc玻fIg7κ$n- $>^iGgh+GGq+yN겼k =,&i%GMҪJbJaqL}l|-m/C0͝+awʝ+$%KޞCؓVYRi84Kڱ'Z!"Du0'zg3agS.{+=9ly뻑 K/W҈b^_gߛz[S,yC:KjsW-o?I!t|)%b̀9S@ٻ6n$nooJlg+ueǻ_RbzE9S_cH×PM'/L׍~v@ \:C4쮜yj˾#ӄ14 7u3^yKzvS"#gT#|,C?A?5y |( I~0`.9߯9 Kk{ERL1@?|0?VKI]S.R M>pyYi.j2jJ܇6 7LgbQ%`d7a$Ot#R{g>WT58&i ջrOn;Y+stAoϏ`{RJ@+,Yfw9qV?*\ޣW 1qWγŨbos!ˏrE%U{2ǓEIJ|-!;R.s-#uw}r ²Pt.LnuL, e-|X mKwZ/42CvXSrQC=wEFdsn{SJXK0I\ ヌSX*ozV)'N%ږ<$hx9?- {ɬEנWV`Ӷ|_ '%1_ Lp+?]UA43tp-'ˠjgi}:=U.ueշl./yœpZ[k[$n 1mMঊ"CβZb<`൥-:U_.9)ژʙv>n>LюA?&%U&ԀȕO02õ)4g='|' 끯3Mv|ӟ9#l03 4wt`De&JU=FdͺEcíZ$k]~zȲ,gl 2f^wd[M)ea'r[J$okz0MzI _RgñO磜Ӓ2F8_W/矟TpWr:\[EHp|71h ~ !gߌ2773__hK@R`C1e'Sus7F'wW`4cUU l6[ICS4ȏg=gQ9Z7 [?H윉U&VYGY01 vӈN((^()6<|8]pA7g 
-A0$(-V케LiSiFk\|S9 p42b8ƥ 822!l}—0XIɵU>EK״D-4i|mo?̡ SYj =/ y^e1@ 6rhuDᥳIgB2W1GnR@C@p?7E8' P^r^O:f9T-'eiodXsS^@!|sl붌 7Ez rj6MJ$gRF F[vØ;UYDGVJVYgEB jD x$JE*bPGSC"ƈ+:Fj ݪDlVڢռ>"m8|s O~")Y3_V8n'sIdr{vz T gZ}zhrW 7o:_ґqwi5H&MȒT=.2oM2I93|}Z SV>B%N9e\F6ݏ3Μ5+P+9Q6 s*@;'%zQ1` &x"eq Z‚B#ZUtn:XRU`ഓpM,2Ifɸ;ʑ͜PIy!yΛȝx Ү~  c QE*S@DR foFT*H|ӫOLr0νf) ช[0!ASAq%J6ޯ1_f")(B;8 ϜSvViP&KX3)XjIM'"J aĵꃰ̤  n20W2/)8]A :"NF3a>":]QlĥuS^ zc?ӠqWcyfO@q/_ˈh7A>7V ѱY:(^QvGj7uStynI%3Oʒ9a SޫےOa6ڿW_K5\)hMد%;~yfIO6JoJiYլ/ezmd zoF׋l@?{6,nG9ʀiiRgV0ubIo>C_Ƣ]p8)]^aSr8Ç:M\&皑XH(K[XJ0WBDphg`-V jgCGLB .Dx*Gk$DUT^xLU3$dܬ飕>Xckͪ8}#W8؝cgk7cU` rq_)C6\| Tv(oͦmHΨ"@)lnr&-r J1HC@ETdﶤ{I(0mI E?,62 JL:]gE-4*]Jgp^RMycHU'U+zKQa<>?i t;~ߛbpOPM%Ryk1JNǗu]Mhq/n]4Y3QI>PvQ$8K2^J-997[e"UićҒU΂-Iw.Fo9bCdKϯə 0!ng\L,e?& Гj(:,7ɋ}W uvry~rɎOGCw:{iB!+9s:{m *I'wԚr(CA35J5q]!UreRcD/"(rEkz6!Y9o/*9cQ$YڢoyW TE#U.bLÉw޵+^^d&ٴ˲#ilOߗ-_Ԗ-5)RI_ıD~XdXwX^__.o`p߭"$|ʎ-FeJt< *ηgeݷ&cfTT<0uj)(r{{D[P7F 'Kt!pnV<kER/RHt;|EZ5HТ=1_:SG- uf1fGCtdzWQ&|**ҘO۳'eyւkqhzu9igpT~g5bWq#O4 SF`('WN{i8fBWYYn\9e03m) c-J8hL*q8_y0Q$75ɫI;3g]珯MT Ip`Q.ޖ5V]mpX"@`+r}m!7N5HVv3I6t_#z)HxwחŢκYLus׏uJ|+{sk^}Ӹ ڽ'ye0rնԲ+qK Oy{o(gE~0uAȠ<`N2)Tn^U$jI͆曍_/i :FtqF9iZ:ZNeͅ)3*W##x}sN0ʱxN5P*~ҌT]C^MJGp u`zZ쩰H~ A$".2r`Wx.z LanT#)BКq"$ U|VY(e3 =1Z/ (<̜\xas0 =˨LMJdegckn^׮6YWpt"}A)<؟@`Fc׫; 8o2 pQI:\6Vg9B?h'Jb2lΎ GNET+5+5MN贐C: &} pՎAw L|B5{ ?nn8gFVٻ8ɻ8Max%6gWDW>3tWY*KKԔa1KciDYɬ̖Vf( 8ULFcnj2gM ލO:NFP!^UƐ Mb @4*}LlGXk 3izmh<\+L6ZRB Cr`7c) 9 N}A1PdgLE^`e%b#1PTMoZdoFO0^ȨuXgcNП!罔QlQEmoESҞN+vqò) qdmn*\RY8iRVxRsh)hx>GtX坴(/8Տ1^) NUUH/Z*fىf)fǶ8vZ eTAKu~<b^e#_//zv= G=ovS:_moK5Ө-w7:hWכNh/Kj?8HA^Im.@)Fv_~_:ȗNtL}; WqXq[rԈP:?7mT?W6Z(`ag˱gTZMzsp[;^)ܢ伏(J" NT`<!Bԫ5 dt*/:  ꄟX)Q}x\^3v8v$t1pD=g/dKL.{[qW>gñh ⌇6lpv[߄}y] њ&n:8OK%^lٔ5o))XԉâW>ƪȠE-Ds5 qgD3%e4]s8 ;KkK/ى)W6o p¦S >{+sr&hu]ŧ~fU{nljRiInjrȊא$hMgLY H 8Gr3Pwz.&ޢA4PD|᙮z=i^mtHqi&$VQF Vx?^x%~!$}Qt4ݺqN33yT`F M+0="t;1{7e!yDGbƮFc>pƒd.@}<.(i,mKۏ/c?^Gi&0uMEG>#Fc8m(GĿ'𒺏r2ƛ\O0$U33 %ս`VvZm j j'B%>iidRZg-2Y:RB ٥x.|ᕿj 
>\47TM:%o%KJTR.35@Znb7UL4@lvӎʻxkzzD|)Ztu}WuH6G]6.neFbTȍqAi,8$5909VЦ#B^zS E (U.)@f)>Ogѐ%OY*OBNx)vuy2r :Dw3T1GfDRd`XK:IP-t}Y2KJ_4ak޲b9MJ^ǩքp+ױs.Wj%{vE:.E-.ԺuLw}! jᇍ" ZG1NJ[/ S-3@EF([o:& D*j7PѶycgLqHHNĠSؽV:5w]#݇:R>Ny>Cx"pdc!-5B?C{>"J]+*u R~ p06e¡X:Ucvr%"b2) pf 쇮6&}ZE}Zpeeib-颅RTޓQ/RKdX+qpD5g^8_`$FV,Z k!ewBqD֯Zưo:rZi8k5SD)p^ʥkctW5=H ,،Yu609T9*?o:<u ڭOY.[b)]AK뵛v¯[O$d5ې֊ӣP N|X{Fh4aixiW.G>fl .+@thRҍ9z}_//3iNtʨ*Z&3 ۬uܫǓ Z@ʀ`7 LKYYJ-*gڛ8WQ9)=%8TTښDHDĄw=+ VPX:p߷Bn dӳ:yXswyќ[z9wS2[3?yg˂3.5M>Ylƨ j"|w7skqR0MÓQps:s>nϩ|,ci̜*c^z2;+µYuJc6͋;>L p?c Ӌ ~gV;h|Z,%ВKZr%ܮ (la4x&.ԊUF}_ w.5Ԝko E=塨)Q R(|P,֏o*BdZ \vut~:UhQafO0X w~avb3iP +%6rz72hj\k'!èr 3qa+r p2JVZycZdv]e21(9Fԁ*eB4sƂ]+tщczx`Γf^I^`^H-39}-u}`zZ0yb%#@Vr5Yo.Ʃ N20a> bE쫒z% u_"lG?toN:}W @+yKV%Xs\J~܁Jip!1ґ<눉?h-xrѠ x8X`sZ#MR9x~A{g⛧*y⛧*y3hr )>yjps*i.BN@lW3\pGͯvG&KNj`P|1ғs&T ͯ=s ZghIauVX ɯ1+,=9nV.8@·jBO,K)a ;WmU?%+ hWQ 0OYY^nDVxJ b<$t cR( f ^SB^S \Ք:\Ley:;HDZdPaL;IX{7YsŢʈb'p\D4g%T+ɠ ޛ&7%,ѐxHoOu,BR$qQ,|^91ۃzV>oot~{󛽭]ni]7GF^ gz>4:Ĕnz. lM-j88=)N1rGoai5*? O؝rvdsDvj݋nf폎3 V_k⛁h2_HZŭ,ԕY_$f8>=˄C߳;ŢW d8uEtja@ 5~f͒>@vZ'iF Dh~Mn a À/?RA<Ho V5Pٵ۫ :TˎC"_fSͦ*ш}}IҘ/YZ4ؿ\0tї,9˜#^$`(D~"kt3/jKB)QF5`u qC%p+$1ڠ4Ý`њH8ɺ#-Lf.2D5qCb {IH&, G0FnAWaS@N> )C(Q(<^ LV":^sWL4KL4+72&Z6oXsYꠙYꠙ;hVl_ JpL |㌂2`{%,,º*7p0GA@GU`q\EFҔb=:g.U3L]ț6n%\Mw[q魱:JXoF- xJ`|FIJd<"!TrL4 RԜA%Cŕ P$ FXI cѭfYs)Yuk1 -$^)"Z8㊀m"M5?brIdb\ `ai`ahA<@]{uXXd뷪+g RWW ~,tRi;-xeQgj{㸑_"%fp :ȒQg߯zh%wOO$YfUydU `6%j\g=+1׆>>e`SA<,*2Um?{GT$aCcʏin~.T/_`'X{@LQ2ց]E.($KBS%GhQّ6,&7R=c$V&$ `Yzܡ}]njnjf?pȎ[ŵJ3йQvӼEvY:6~Rju?yP!hwk5r|gj|1 x-?X>w;/^:hT&+oybA}.#uÎ؉C-bcMe>|ALuÿFBL_o>쪧5:]PwLC 2+_ƍ=Ca%41ǚg8@ 7(T e1p0O.jt7䈓>6Ry>"u07#5֬0" Mޛf# j4O/{2;ܟy޽#)E?b hZX`"7ox#Hn> T+y!y-T0y͓q3HΛJߦ҇]%z-37r<`#sՄ4r6.gԋkہ3rWn3e@K &ݝ2s#ǡ({CTRWC |@ X8/eہ ,v,%hA"e]ӭmN{ &|Bދ[ogu,Ԟfּ :I$$)uO7ҪIS{IA"i^Zܒ5[4')#tTJj;2zjIJPZZN 3$%Vَj-'ꤲv/1I߿ZB9'k"zr2qvUKOvZ]L|75:Mt垂k]ZX3J_ {d>aw)ºҺ(g7^躖^ًx8",^/OcIyxvurUQ1]O%l, z9. 
&H/\,EB\ VE~e >&+&A0QG;l8p4'm\h.ymՄ.jUw.x*ѧ6;1č6xCe 'v0w?G1HZM ?v61Scza7:=In|?*JMSjO}5O7m0o:M0s3m0jګo Ý:l9^8ݩ>mvց+_Am?aZj- zۓIY"9g%K**>9I^icuF)NDͅN!%[ta&3Llgb?b;r'jYpZK@Rvpgӫ{Tg >yƳeRnk$riUH UVE2IkžZh)(vEz&{fx/iu2̼g=3١Wo'ӱX{g&ͷ\g]ڮוzyaϮ=iՒT_@)ֲ>׃+Nk!ceA?t/AiRA KR&x@x1"){=[,S gN iœBsZhN i6J>FxљR\|j(p`a!dY!ЗEB}=@_ՁBtk{YOfEIZzQSUFYVvaU#gN W+/`d2>%ZI]o+cv?]|Z.W*~q_@;ecng)kz(kBї!lzf=F-j3:YTk(Ol-r0'Y(9v-0*PA+j@+J1(F_>9ۦfEo ؀fj.PB1KQ=AUc%ф 8$O2y ݐIO|H@nvMa5LW1ي6MZ}ZTcV.YR į.r|z`tte|kSUVvٿ2djKK!8Ph񻿌?ײ'5%#}MFzu , !2nwh6Y%SK!X%gGczpCS iqz|03spw!]VD61yv,bgf~^1frR$).hŤ u1)*/"%]e=bLVIHq|)̊)*͇7Y C| Թ]dzPD/ºs; 1 aP3/&Q*%k=]eeIGR@Mo2{Aio ywRw=< &$E?o،.@&$l3֋65aeerL͊T ldi7@p(b L]8*E#JwKvSXf#vdh "-ѹM\ KFKVB -\\ CvКF]Gv#Lև1ݲ4&iQ:fx~4u)@:9EzMy_CLIA%㌦!&i8ʳLJѤMF]tJlN$wM9wxBq~]ִ}[wILq8&YMĨÎ VmF5dD)Flsng"R[q168-;moY3XY<4HDf\88X[vݓ!OHp o[¼W2mp:IGUa;IHr^{ nMVNJX8$TH&y#{tbj7$-ALrKEVJ-FJ׳"@INk@S ugfn! aܶ!a`m=PUHCI{9<,&Z!tj;j dGlAt-񧒒Ő,TAYlLjw7HSH(jOg#Ԥ.Lr,0E]?벡s;9X( :hlKPm=ɘ:kv7Ii77=`_CaDm?lZbJjrXgr0uq̷?6W6VNzZm=-x~ZQfs@Wb=uǺm: %m Kv:Ŀ^][|*M5^-~h3 -t)-9P7e˟ڇ߰iԌ͕v)~o,8[k+$%UX :EdJaG 6%= [7$L$x Qւ#/ENhlG[F5ˏ^cWUٴP}==QVZ{l"I.59n hno(ͷЌ4qڲaSe{.%iya?琡}ec94; Tͼhd# f‡RmMFVhCl$7 ߰;F04bf t>E\ρ1UO|l oK$D@$JH{)%m1(mQ# 1$@124uS?xJs%T\?W+﯒6Dnabq6k'<#(9W% FȦ rz:uOw:ןS1Խ 57&1*XR6~?CvL%w c§")sTrWmQaxM*H/UVPLC!;U $| 6 VUtAS5)S(@jg 1A`w-Xs9M+}Ȝ(L,Mw̥,]UP6jg}?l^DI& 8B*-c RS\[9MY0y#%T{_h(jR6,h \gXldDM*הZ|<ڊ(=蝙^{zHMSt|)GazA FO72d.[G1X>q]g%ܐ3QԔqMb\kA0WrjekR8RpS!ؚW& pR[n$ c4i[ff_w0& ZP ࠸RdrE hik 1r\i W1FE`}#˖u]56-2"'ߐ6 ƃ LBD>>~k20e,r JҨl+ob_F0'Dq+Ӥe] '8\gA+H7 >{'N*n6uX]]; 1 fjǔJ RYnVv6QH8efX0Aʩ8fD,aR683HocЁ/`(d:Z8TQkj_Jx3 n#xVRڼ(#җwNe (P  ֽ _fE NIjkv؈KΜIXyҰE5I,ڕ`"q Vd1Cll7C nؕIr$ñ碂U7nNkHLZ"ت\i:?QiCr~?)B(7FL}jץ 1S75 c(]YIf}zZ$q-@FQ.\0 ~4ʣ稍( `)MHk}c@Ux1-Q_ԑЛ:* Ì5'T^\n%$e1Am]8J 3(>,V,Vխ3 Xo :j8ճ75uF_0fe`gP)+ [+_ۺ(Zڙ\Z JW3mN験+K#ZdL ULcD:)֘ O4 l$Q=Z~]m]VS$ŜɝEh-+ƭpzxie^hN0e[f/QtUku4Zրk譏m]ԠxB e'{߂wLi-4q!=dۢVd#V?V׭I4xĠZfD&=譓m]4Uzj:ZzԺqGOw]CZmLxiʠ52t:=K2 ʷ-RY3)Nҳm dE+c[+_ۺ~ZLZ2ig:e74Լ''g,Jmme-!@\ P3iLHH]ܶEjHsz)=t` 
UJg;v%!xrI3lzuB9YʊOC%8ѭz)k=P|nˢV(/6}S/9S/ ~Z*Q m!D(kJ*6JG ƍk-Ū^ܕe9Lj3cP0u>ژc E},L`o-Ǯn7 $@FB9Wd]ic} do[l,~B,MEJSԥ]vMSbJ/h~#L_??ih#<5GdL[9B1s kmsIEsmm`V}& _+?W3j}6lVg_\\8>W~:=@knȰ0P $ 1^?;^6oYV Oyԗ=EhNoPze]SZ4O릶*CA*6ѧz0anSb(F5OO ̝->|_&['72 iMwk-͡`&+'%[Xi$; ^"H/sH[NuY^sm0za@qݶEBH ) : i9<<1XU$F@ZĊRU`z"r&;)UM7 Z?ޠ7%ɚ ZL h+ x_>_byشl4i5e"ŋ~xlpQ)[XlZֳrB4ýLًެ۶SқMLE>5xRpwfː mN$9v{W=ݖO_ w` p?=׏F/`O;]k7H x&5~f/(%b$F,_WXc;?ʇ'UHFڀq)[u~_\ 2Ҹd~1 [|D={_;C?16 RtdT5θΎutDgA]?~jVͽߖjl|Sû&Ơ'xKN+C ]ymZgm:{9~`3(6g"JF b:,u'o$ dbeUe [+($׍ ѡ*~* & c4/hSMZ| dzV|x|ku$:qDTү>pNKvFvczN~Qʏ!:P=\vTA &5yz9 _;p~_5|I[&=W) ]z9;2#ܑ#1Xਗ7A&ice݈B-'|=t߯R(652h[R>x,Sfm#K1=p |dh&GVf>k@m>='ަz˽AlF4|eɰI[M|oID=aUV-$4Rk8*4WXd,JWWUx.wz VّڑݚO/s|{%o&f f'>>QT"wDi&1s::S^jU)3rYG8$y}4ّɔ24&z?Xiye=~,Ԁ`4tt=Cl*a I}aixNՇ??|c5V44EMlwH7'~)>k?QƧϛepyFSeSZ"-c~v`?h_<;` bDMA_ 5үgYoe[͈, gLCx/?Ԋ syw?Ns]socz\x `iΨ5/ '?'Is>0mQ>,*O=GI(4U*|ːěψ^KX:i9 b"$9/RH#~㽃adsIJU(S&}pEB?yZۯgn>WmSU:] z^Z?eޫls۪h^%J⬞C^oL0'둷\B޹X^c<ΓrZj۫_V.<*ɿM[\~}w{|_<̀k'5Un1T63Tܼ3n*N_N\\;Y&S|(C<^d[V>R}`onXylUdLϤ*iEۂJ^ػ]ԇT3}]U(f3B^zpxGkr3ګ4bFkphp4w=±{KHX%ve]pP*SU,sroE3XR:~ihDN~2K;e!^*6a 㭚OO1Ld;j,쾍:Q0OYk?yK\vRI&J_l{PP{ 3z/\f֭=h8f磻=~Qu ם2ZջS\ ?جFUz>h͸qCKՉvP>Jk"09qjr7T-tӼZmBwBg i8vA-w# (6*R0n f|fMSY9s(JΒO FRDl]oƲW80}? M)NN$} Ȓ,N\%e$kiiEjnl99Uy'NBM_[7:ФZM^nIfS^LHDh9aVa-gb썉66FaBZʇzk,:C) ;5ၣyuܨ2T6d5 &us>]; Ql1B%v+"T tR`ڵO1PM+<"r)ػKpfE f?D9B\nO9N#x+Rzyo9>8{hܤ`5}e S+͈\Sr+?Y NLS]MvfO Q~؊j$D^哊PC$焢IS*-$1ô&woVj]cBRʹh{ FvKSZl>:AsU9 J Ijm> m3 /7=ˊ}vСPKbBm-lZFH AVaþ#{šE5g0EMI߽簟 EZ|=7!T- (JHm.vIѤ蠨! 
(5)sT5,C.UckQ.+A}H |&HV8|b69+>Phz^AYk+Z*yQ.@X$,%dަ+iJ)5dϕO`=VIU^.h9QS}|џ_ E`n0ΉVdҋ`̧;.omUߖ% 5=qJ|Y ԙmc7[o>GDpպ,NdUe=tlݸ)Yн\cܨgf9> &,Tքt Fd<; .{Sn{B1k =O{װ9doXKBxn a)8UB(]֖)SqZc?]/"T7|z}?fWyNԔ 8>M3L] ϾSj2J2XuC6NDuᕩ+N-p::BgBm#ki^-+S m'~}ˊmZqI5/Mm'DhܭbeZ hr뀴FYĕB:Dr:/z?ь?evCUe} [OTřFn$݇q6!'ݤ<3Ϊ4pM `}4ƒpUK iUvgG8L]{$Uq%9W}z WBޅyWRU4OOnbfppM~5?O_z3׋f0.`d>|gxr>PK8|[ףvg;[FfxQx:MvP NB|!aF\pEG(Hjz<@wv>z8oˢudՀ> ,ЇQA]Y| reGG.)~nLۧj>kU*XWϯ^-%קf=|!J*^OmLUsF AEdLT]U構L0Ђ91׏VՐRbEE-{U *[o;DXHnR:qƐDޮbsNl Y Pt;Du9 4 #P?C pvqsƅM+۷)u0nǫ~+t+nv:b@-XފGrk  XwΝ(}D1^kwn I/ pP>D7U"`s/VԢwvU:MաE~ kΕ{? %c(yj(* !kkI jpK&%չ5`}cA H쭏d"KrHɢtMFk{ :> 0AK\Z}hZTxZ"dHM5-j6呦^nԫu9S ܛ%·C$7fLt5W!4EMٲ+GhTMnFvAZ׿^gAiJXi'Vw"Ǡ ⬪Jw "USbY(8GiuCcT56y4U(=Տ[SD;؍Iŏ_]Dd&$&kkmnvQH.s[pW#Xê^oŕu2{h_%LkJX.qݺay1aZK^,|(jً/L&=Ukro]L ̚XC X1nDsX$߯ƹuՙёre$׎¾(b:;Ʒ &M87n*\Qzn LG0ei M7Fs5 gذp9mr [) ]`UK${3;h #RU J퀟R)&!jTvJt*5=zt/B64aUᗤm4z//s1[,r{D\#eW2Wl>G}ͤ@ [7E+ZTi{~McW>sG*h}!*^ `*q5  rmE䒪֬ "UJ,",E$W:L||=@z"do8b cf0{ KǰH3JH Oℑ)G10mfV6=2j$ЦyXsvȿlX񰣍"q22A(pu32Xx4D93#)A.:LlKfcD Af֕ MRu5EJ}}ZR{cs}E VeژLhFnZ3 R72# zNW-T mC.Io;~:rwU$}giaF(mT`'l ee6%5;2Ըfa N-hԑs nq_姒3sTZҾl8h5I䂏|3g04/܉$>M<9d摹l.}7TbDN=jgqBpL4?, F+oșWtd |Ka`/S*mꀚɸR0 q3,{gU桹p' n50 e2r@ ^$ēx> |0_߰?y59g0Xyn2xr~ wA0Iྗljy0d<g?[^u7fdt3><'^A]g"Xf~2&sS^?޽ӏ _ͦ8tyI<7/81:w Htk:6 Nϯ`˛bi'M>OA{̉ۿO^O,Ww~O?O~'p8QFU#~'H PB]9S)u$K.wE҄po\C%޾|?ݹ,77w'ûO˫x^[ܾ/7ܚWc70\ޝOOܚv=LYA+nv2.WI}n]}G' ry|:sS#!xC `ِ 諥a=41ne~q/}NN@2g%7q78;9_ēe 3lLQt7 jT^e#'n_33*Ǔᩇ|#<;0A7ۙ&ɞj2p~?M'𻔓;̩ h_zE]䎓0ۿ@E?%(= K |\SrǷ?l >ܫ1C6Qb:_?at3oa^.?zIL6ͧ{ŵW3;5x5˿]Ir;>U.}xx˾_L<迒O?"P*U0})tU#30bejd@b5|"hԮT-m okNx6|Rt8-#5K=-sQ^ Hn*w{cEulFZ{:xA48 h  < añ4gD0o}#,Gq-Re> lΆܸ,Híe42w2'aڷ(&gw?ɔWu%]%*%ҾT~ȹ\x_s.\_9*݋>DQ2Gy:A;\"%%z@%a!rH)/\?w=/^\\:A6,S^i?މs/orO]4ȼo^t,r.?Շ)5έx:nH;n1${ N݋9у0/͟tA(I #i5?t<кapA/ >S`H"A$ <:HMNuS)#rt]ko\7+ } fG2OI6` Η, >ǚe%g6Xu[-%N ټaTk%$,<+W 19kpKY*ɩz5yH G_eAuv훺9E^ Äbk|K+VZhCD t`}Up<|aȹbdkr_{lGrψ5a3QԱ2WIN\U|L 5xcX|\c'&@w^Ϛvly42(߳Roj{n!P ?|Ő,Gj-% u>{>ǿ]^-~g-e=Φ RYrUl鏢Z1Xr:G;Ux=?Cqǐ/?A'zCt4Rќ_ͮ"!4O@_ 8Qr'QUwxL86GB6O`ۛ9BG射 
GkGtQbd{3N?$E׫q"Pz[JPX J96{TVdžutMJ'[W>v_w^_s#;?&1\V/WߛFz 6ts1sCsnwLh6olq#x{q}z_s+(޾լ\Ʒ|={XbuUozoO^M<=f?ܡ,h+!xjA͔ )9C+ёޅy& 0X}t4"N&܁ ~6cc72|i%sl\)ck}wΏg7}z(:5 k*nxm&nn]:x=PlR0p~̶q*|Ltim\/K9i֫l%CiR]+{f9(jkKkI }R A猟Rb ]!شZ!O|kҦyj =mJQc-$Ĝpm+zZj&s4$`g!!~*#%و1A[s zZ,GLZUb߄W4 5f/79x{A l}MoՇ^we5m1!iGk3HfDcvmLIj{Lr,Cն[07]u.ȔTńiڷgXdjgo7D_}Mn[Bb8\2hx0é][sű ,G>JZOYa47i!1YVLn147"- x!q-1+SrsIKk5 xֵ8$*y 3ˊ6E0er=k n.\gg5*WUtma81f <}Rc1 1O+[Su058dVЛ0Į=Up T 8 ֽgsU`>Y'4ˁ{u$lHadNqPpUD;̼dY|6Je1Y&Hb X=͏87 =(j2Q 3&NygGE:$j(d8R1T$ $<M"D-K]ڹO3)ffʹCC{}o3?^~3˻eY 7_/{^>=fq诗>y6V>15q|cjyt,Z^%Zb(u0qN_vz\HjZi/),]bɺFD9ȻDk rm׼Rn rxMi /Y5U,.ZZɺ M6T%%!%z<~YMix㈇#yW˜ًw<8l> ʧ7!yxjo?w)]+J XM,ey` |gjE>7~okR nma_9ps4~.fJYxsK94;` {7ڶ y/]A 2#]a|w8n-u#`pȍV.YVZNZgI9So;yЅaCC][21ܓҐ,jbNhsyO5&S%,.s[S&oUaɶ5]H_x'=i5` mvָ$VlSD'S/LԲjD0Sɥ4 >4U&4E&YH*C!j YTX˘KjkuEJ̦4i`ܭ>/.!Ĕ|[6:F*>Z)E!kXZb!ZbXh[bn57jڮp1) 0%͙sTAZ-Jr5;xJF]B|x) BdX̪{ַ+DQl#Юʆ TShɃK#TZJ3dڦ`ɬ$bAbn ۴6 8KZy{6dKKǤw] `?h |\ٹM mAsaĸ2Zvk7ғz} IE4Jﷁ[}R*ZcLE t:#qI$3*%_(Lh橄CmK0XDJ@sـ k,fjmZOSC*̶=lu/l֙Ә-x&/f< \}aCljm6*40MlwnrBӉ VyvsqfRmo;jKau/fxZqu |>,K6s|l(wf̭e̼}n'r@ul8y2J 8BdyJaeWlVjVvJ ʳ )ieSenRmFXD!B8,] kLrmhPߣw6B}\17j^l<#q^Rm5"LB._Cʤ+Ws)%""%R-)"7Ҹ4drd@8p%m@emO5\L11da$dڲU/oU"u~BF3nPZd5SKV'[1f(*ǚ'!3/9ݽjxK$ԺױG vr\%[&I! /)$.57b „(*JdJk&Ai586'|=ŭj=Ӷ /j!k[kT?f HOHhwhݙlcr 7ٱ^$󶢌2kafz sy?ƯASA X9bXJiJ&LMHO8F:G/F{ĝ拴tl Sac 0~i}V(!ӛh !%D_zu56;xUx &'i}*nmrt;,xKsI&3p'}[>Dm:3u3f][oG+^3M"Y f1q%Q]U-)(5{f6-FL9u.U碇8IKzqƋ_KZ r1=1:!L!X]S(X)Jr-j'M5Sv! ZDhA=3aؓlB ;:&JY4y@VmU?n綫: <6Nhլ>ȢY}ΕD<;./kW -^at0 wl[+*ۇiޱ2a)J#wvE}|0V7 $dM*R+*+\GivQhJԱѽ! ^O_>L{?v P0ZƟrozW.S-L1"5H[jw.R}81O!YtqPL5C/./֭a "3 N1'P;m!FIAf2(˱2u;l R[w#؁ԭ3 w믏wX()cTj(TH,+PۇYRF#r+wQ'kٲ*q$xZPp8ɡnB@&#L)Xm)7>D=$x;ȟݝ\Ѭ%9& wPKyG&\˶cp~/ۇМOŸQl_E\SJFw-1/@591R:n}Oa^lj. 
Kδ]f[qYΰl#9Z(UTörco?HbՎg<(]c2xi|iO?wG;-BNw d;n,tcG'e,Œ7[߸oǽ$L I[95pW{g1+T|!Ar wET{go/?;{{eqߏoJ$}19;cָ9L\u>|OgBj9HʶI+I!S9t`JEy!c<?nx 7hT7YxP)ʹ$UiTaE]нҹұ/: "_bE=(^ZT3,蠌t!REz} subILw9.u`;Zԯέa/7TָNLy>z}}s"Da\rގXL!` L><1"OaٺD<.?U:v1|{\w[{+kpTsxQ'a\Z4N[~ݹ-L^Q& zƤ/KwΦO!Iu֠hV.4ҙwg k)#}ƚ,SRq7ˆ=* &VE$X($IЎg<=Ś׫$OTGThoAqO0a ;{n8Lc>odf7/W6QQ7sZS)nhzCm^w]OH| @? @oeQ-BEQgaRJ=Q8^\aY0LsHc CDzL=tjjf9/.éIwMXϚ>Aq|4W½i1|T=}^oK:v5ϛ4&Ŷg 'mߢ4n<!|3*JJi{ Q"gR뵗ps"%Br \qIj/3uj/bΦˈ?t,ARucj bzD<H_oZ?( D"b'G><ݹa6]i|qU' P ]?t֛ S>)|Eȅ0dEɊ^Ξ#[x]V5(k?+rf7.udJjڍr]1[M D3\i/7.2L)\*5b,/V/8DX˃ By G(/Y.TB[dQ7 m!GϹd`3S8 QLk3q%խ,=XcO_ݵ`kǁ q5 %}[^B.okjlK5ǫ]ǟyG!f<+QPGA1ʊ,H\ g’[, iBsD${L a ϕ ۺ%u~r&Pyh2`E=bD. dsH0THC#y 찔Xn s- X; 6 PCV d{{Zе[>/pЫk0jDi߽{NB+kOqS@ p!F<&2FcWh@@2+ 84,yU P53~] ۷jXKzt[ہnc-Ȩ_KiޚVL(V]Z7_;YG/fg z:6hha]LLk p<ǡxsB[sG'TrG@ObcdM. ´%e.1G1Irk% QθgBsmY!\ q { 6c5Falf 5a977Wb)OxT!DX}|E'~ՏGLDHh2xld:[2D+.[ )Νξ[O1\F Jܿ FZ Y蛮prSMAKPj{a?+/@#_/m_}@qx@Gic5m) $HFPFѲ} kzh(tw)#Ȓ Ќz?P5eJ֧9U9.^p1Ju]:ZG`RuG$ JG# ֵ7oE3^_c)7JGC ^7q0yzuyN'_j=..7˻rː_5tr_ A%=NC"!f,#YH$'8#A |^RF[P؈3j%=IK5 S*fZR5V}D=P޻tF:Ze¥`D$#5l[g7i̦>[ÅGPJxwBjPKɖPkD5Ts*EeRh1k %4R)7̻KGhs #B ĦCNcD9/n)Vq /]9)r=\3܂Ur͙ʘ`:Ag#, N ˄4>U^!r]9 썑sьٰo_WiHՆa梀ڧjCWUS20 P[':)Q/cXsMQ-5kи~M>JZgCwџU3rtuyYPinHK)x><!\WE} Z!*)c %jh4MǕΝ:ڔuqF]:OR;huF.o6UY-u)LK˅%W/l`=# e (~|-O9ۂ 7=+PEZN*uVhL*o4W*/l扦3 3paUYQIL`]N/(vn%՛6C;q*_lQQ٨;Mf+$E5?kx>(N:j}VU\ѬVgyE{Ҥ LHDphCy7Fg{BY"DuG,#gO@n̘>b &pxr \0sG 9w^Z'm SF I|gIkLy8э)W/E:>lQ6\Tcx*K)YÍ)$ԺgLZpVy'Cvj;74_|p:ti& c^=Jn{{4== 1b=ݷMݣK@Eqic;*r,ɱ}+|; ǻp; ǻny,cR;dvm#I; SC,.d2e7Fa&0xnAa$k!A"fuUuuUw=IKHPxIP^sck D1fq庲M?ԨlӏߛCk7n`7+`R-6nbQr=<8)օ֣A)minUʭ8Bg"k7~4rLtYI{Lt*`y啙?B%n ?Q+͆0; ƣ \OuſvMzۆJ;p:R^ wAgldo3~\Ya!]6Wf'27IcGsʰIHhLi 9 WܚgXx]tT@oLܛITxdO }'XQZGNE8YSc76Q㌦ K>+p9 .velS v;%>ްU86.HV& I|x ԷӚ[c'V; 7-^P@ٍW~RXA Pt4#XK&$3%8vD03F,p!HשIms%uJiQZkUGjEj]%R)ea^=rW!>]x֛ XscU9lGU I\z0 :F£>fk a(%REKL~?)v禩YƩ< WgOҹru5k`qa{5 c;LF-xZ3:܈ /hٜC0 [bD}yt\'CG4x mMq'.wTFC B|%”Q/;jq/@ =ߒ0s|^s Ag%*1"tPyס0w˳s4۾ ? 
G/b)il!vT.0uE0aVZi ռMo`ZML``^tص/dtEE8Xpp&I-e)6Ij K!\M]x5WmKM$7I P D J E 皤$v w#2$1AJDI1Eڤ ;CEH r_{JmAo|2 ks z|v!`AՌ!jGo N[~gm;ީ!$ߪ#`EaX̀5r$dYYտx M4Vn ^!di(d2誔xEV'bBF YHE (dA֗8H: kU|nVF⠩sA7HXv'[ͬ\qQX!Z%$- gF-Ǒk~%HB'; -O/  |PT1ͩ:ш *Q"$ĂNJT!}UkS? օZ%bl1t"1^.Fc^=\%IG3~).[OHz Ol{uYID=֮P[e^2!8ZoJ))цKɮmh~u]|m?(5)XЃ{O,TV"һ :?.v5"|;)f- TjˬЂQ@mD&l+&J鮷탙 ݚ(Ks@̃&_]  =ڥVEhS}Dufd=d=d=d=+\'Y(\X+jMJ Ⱥ$sAhGcguЛBHSSNT52`x .w[cUzTNâpN./v l{|f^^%|Y#"X#=[፴{cmŨ@l+`OzJj% hS{ȗpQ1d!#GLH zH<]Dd9vėf3$_;#b9@/V%9iO 5R;Uk$PRb4J+leR1y3J旽;Z!O7dXޅ[ +g4+\bEJ`e(7DEDDD[gn4>,Q&ڧFJDi8MENKF[49dޘ{Z0'[lAS\n&{6[1 QmI-97)GV+͙lS9Z(xWUVJ(*Tc$.oGUFt[b x6ɇA'.4nS$3%e0{֞kh[x^.7pmW`/ A7a-wow:vE<N EE68㫅nGم/bE'~ǨBTXhaAp.N+,T#B X. 5@DjT>s5"¶P_ṳp;cǯ}evw~Z[M/NF+}|-rZn .Wq׊.8鹨dLݼ8vQ0H8Xi :v4'psj ǹ`gJyArٴFϊl~4_5)uOeN[U6r0;t|EG#AyͰ44}w9 . qv䠎ٯIhȯw{`k3tZVDk{Hz ާwߏ@*#ߐZ4o;+C~CxCx㇈PrnӇ^nߣxp?*'VoP.._d àwgwڜt6sIL[gQ|O:o#LwD^yyupN)Il2,K^~ AU ^ ٤nN*ҿTs_|`4FCv0,$~e7soe3y71Sd++hЕc~< TߟΔu)ϧoݸ˰ws] #n\Us[H'sKbǿr_(Wa|w^^|ngք z\k8 T|bس|S%?Aۅ?^2ނF+aL ץ7(`x,˪ýYu:uU]!˝A~UX<n\GKt0{:| yu/A;[̤7ﻃɷp@n} ^|\\s8882'/k/{'&"z{ (ldWk^k7z9@+~i ,;`N?/fYW^1~{=Fy~x+k* n %Є +TQ^2n Kģw.:uvoJ:1V2ӈ4fMdD*bj1xͧiם `XXs{VLLQhK>0cv0~GZג_õ<%TfJT5V#ʻ"= uYWk9Q7|1\o~fˌ|c2dDqЩhL{@$1Wb]-ݠuZԦr41rv4Fu _rv`TKp+OLT!M JilFҘ0 XΗ]jTVƞM=acX*ʹz:`jNM;m !|%N)8Æ58!W#Ȇq4=͉/| 媾f4)08wղ;![c;­>ի_>9,y=xo½?8~h C`[+^-ejPPcA01)/ӻ5Y7IF ` )AD"p ?,iJ˰su$UJÌ'MhI(Ѣb-JKȎV]p v TDjex Wi!al6j)V 6i'o6r ®DTF#3ʔx@:>df9 ԛUvVƦ+lLaP˭ !?):3-7"#^9A_ c+M1M /GFD1bA,'6&"RcQMp5-&ΦAg~a`ƙ1My>hFukq6i@ƧfR#͟!o^ϞaerxH1G)%@i0S+E[b= =j|4ys^2$@21%!PNQk8E`}O*V'TbqGE~;w] SJ$™V٥mKʭe7fƹϩrٵq,| 0ɧ%4$lN_lgֻA:LCSG7_;~?j t&/ϼ<˳,çAX1 S4ׂXPœY0rƚc":w, 7XaD]ڐ3; s?ȸp58MΔZoo%3|i)^e)*5%<)\4J)4!3j)=HhA1j+ *)%K' ms"(P)-. )"9bO`a \C*کogJ:BK껌bbYxY|BV,۲8"Vx2 ݿc ccc) hrԥ[.ZQhU5)0pZ4Bo΍Ϧnefv4bU:m5UZYY猑aBlL] %p&عb)窷P94ιC )\lM%J9dǑAv1&[$=}f!$+޿QH"?\LS5ke$|xs"Y';)۩r$vji^́[OYB]B)Z1cQRJ{ ⮊r; bӴ.b +@)gk4-~ .286e>!$-GZ4+wԧz,y7dyf-v]1kݟ&Y} -o{BaF X;`RB88V_GHx>GGoV" <e vvQ끃khY$"\ ק75}w_>v0ہQ:2! 
19BM|G?ors8 oO?~@\Gsf!av=_V$pv{ 0,GSN/AE1gO ֞U0_8@E=Z]*2k1J'!K5(C릴X-G.]'R3bK1W6#E!~HL&1T%;#he3(a#IjH - #o S bHI=4ВzxwW` y1'~5182 xC9*PG15kZn 8mg~gOsJJvBN KO\ {0ʹBcp( 8c_`/Ga:cڕ$}}hg5f [:_*NןqY(x!,ĄO/1yi $u<$t7Ok7%Z8"pynL|Y>~`Gf6܀=t++5(~5R.AeC4%>O KUxRb #n1(>қxMyS0|<3UXYXrXhqnh ;i>oXmϫ<,m T7@Eo\EU>f)g[VLV賈LK6EyYb7;$}TZFNT/Pʶˬ :D&BjEz"c/UqA;<-q&9ͼL(UVs%'9oF`BQHj-w_f > 8NRܓzpB92\^|ygsu\A *؅GQx2Ƴ┌d{;xN$J ҉)5pL;䑓!`CBjjY MW9|33_)AGѯs*bJW"IlPT؂s)SERbQ4J `JՖ3^vwT*G8d_z_U~9 }=M3)LG&)$N')Ln7'F^$qr" ;S$3;MJl^ᤙgX5~Ym/gf3ںmҌ4*qŘ/] &-lŀ[Ǟڞv%eX~'E3dƑPت >QJ< u-CmH{N1G46P`z" FKz`#V#*Tu襘O=9(AHAEW-q>նM!#j%YOc 37ަ^,r0i2>qy]nXykPn} y@C%F8]FۻZ<,FC<4: гoge8A`Z .gN҇qQ8Djy#1cy#H%l:ˑ/ ԍ0 _M'a(1G^RSRoIU_jLtAy)4BH^* A":Ԓ2JzٕJO0%;-b>8Xu39-uz5l[w={֯Z:_&6DcHvn~xrܙNA3Vݹ3(?.b*YK|>wjW]+g][k8w"W8uotF{ٳ9yoU<!/|);Joug>#ĺu}I̺WxZ&f QsWsϺqViԡug>#ĺun֭§hm;ᚺMQVb6wQ|i(d.?ˍIM뼧xx,_=ϸg _[ 痛 ksfslbwa{wajXnu2L] >qnd9rTtMiǢ?_N*}f aNϺ: "s[js#9gn48xNsʲ*- x=\W [`“8|9yhx*yF!Ǡ<][SzνWBb>ix'3&7׿,S{5 DM]췱iˏyHA'"g,Φzuw5=l>{ ~֝{ /G.jm$]] "..>Vw'A#*}frnf}>'Zsseu·gU#;BjE!q6{Fzz駌6H u,_e/nݖCϫ]ԝ0TPɧUq Np.s\-RzlVI07cx0İ:JсkiՌݾ+tIERh_NԳ=C08F䚫><ײg:2zMrUށάL]ZY8\$l.~6EPE¼V8wZV&iBxTAν0 d9p?x G)hvvs`IVQ`GCq(Pfn<* ̴/H2,QMn'5m۱-+/힥_1xa. 
/ TA1C Eؚ-bP&ˀAPȊBsñ_H8HeTRc(*y=;I+RΟ.4mqbฑ(FME#ݾn֖Bm/V1"Y=i2Zs^b G-msÖWʖÈ2.L%VA :5XsPP $y}9mh7{RS.@F3BbNm< b!/|F){o+_$^i"pe2['7_3OsJp+tI";ڲK=+و)j 0>^(#){A~NS3֓0XOz pPm >(Wŷ$=XCx1 +"Li͊4ÄŽHJeLvC)P waN~ T/ =R> $pghpzmV/"$^ #3Ҿu:=cviGE-b%3H.GƭZ }jZA"]t-a1CJ)9IE[o~EȑSгW6OG<>AjtH Ciaҩ搠 ( E"'HDi=E"+m?B5IDi dݪd3He %GTG^%iGDZD8`~ da6 .u o 8FgSM.tS\^:N|uɛ=h%^?$"<h[kiU9wl5lb ´B/2GqtΟ};b~Le?I;W,UwB]¾JX\*k #uJJ4gFr8]Op@%>]s9Wz9t6xwR**%f"Ho>vD)E.>}TiZA\ySL49)6CT)9RAg5AMg 67 I@1XU&b *[6Q܆؃0`-bbzx_ߺB8QOGpۛ]m3a/DF[ "[F/~TRoU)J(i^mM~)j ZbCD[)凒j 퐖~zDF" {B1E5nS rր@Vr{VUܶfY5'!:TO|\xp泉7My;  sLP\P LSg<զ-(B(-zV%t*u,쥈X?$r/4)J} ?g[1cC:%, S PJ5ȝN)Aa"7mPԠZE'=2lZ\"$ }Z~A^+!Mi fN5Z aI(:Cqprn\h ==mk LZn}~V 5mFjOk@5p>MlkEV6ר 9zg?=NO8 }̫ x yIm`,jچ\W8 T.W v'iS 6$0iq5ϼUyF=k0>v[ Aa5q'DqA"vL+۬Dzss㿾ۛOf|w[^j&USկǮ浫R `n*B!ڍcAF*0*8% A ҠbPcU.h) A 3@D6 R(rHL+u.qsܮ&!4rA\PP3"9ѹ@ʹ HL(+&K dhw`_^)GȆP0QijoAE2z֫Ki!g !Z-._ vG1AH~|{3 +0n& L@Y:M1(rq D-ұyWJ+'+¥eP.F)@ Dh6`MǽOoҚZ*dTcPIYZэgP24Á;ဏNND/D\@{: Lm24>LPL34l<2JUxe(y/t(33[:ST *s}E褵C_$%*_܎!eUyۃ0)LT"at,iQ\u yQʆAH( ZJ_MEYI\VLJl c}nZBӻҫOj~;gG:%vGJXu\=5x;J0J_wBJ(DFԚ_ߩd-j?(|qTw`Pe'h1k0{1ds??dwfޙ\@dz!9+As (4LNXBbFi%BMFn4J_w75x7T -q1 q&)fԿ,gL塟q1X %GI|Xdp[lID"(dLCL17 ,RH@,mdj$2&NM>;waR:9Pb yHrx. +-!ʩPK {I<˔&H$0BI`DV` 3 $-dcQA&=!#;A%2W]2͠M<>>8/Қf b9濙D٥jnj9\mTfpaG >:;!b3z(^|ć`u9H#<8Hz GIn)BDs1Lr Ws+>f<5W*έMM7{X 3Z@Y sʜߏWzzR-JkN8`a5v],x!BY2H2kᔶ޲SjoAB;#O:BWVi? %9G7! 
ϣg<5|Qq] rDA5Ñ.G^`ݽ0~.u9fch}{ -5l!9og gǥb6*[#LptGo̔GF|`ujh}t$>lڞM\G᫓ObϨ4:iݩt Hzfw|}S|W6H@!}&4\N.00|9zxa9u_tZd^(d3G )pH+@/,g9̕fYMa"Ø,FeLb#(W(ZRXH9DZ(+yԗRVlGXsaYEs@&sAE f7QTBҥ2T /i5@X$gp'֮rA1֦@`"RBA( \ YNdn1Dqsj͊xGޛ7Rۭ(:=N1GG2^.J2A?ՙRj;ghw={֑Zo|_1n7]sGXItn̢@eo}y7g%{ו}37I_Ay0w_f`cXI.3ϗ}'e^ŝ>W{ ]0QԍXj?#k04E跼X`şgl_++b{듀dKt+U鋛4C^R0#ݲ}7]iA5ΐH\UHL;!)nvY؆?64Ui7+Le07&@& ;mA vWg@JvWeunN{UZ, kWO)vBzoeu8dV nCZXŎg|9757\AƛN9O@m砺OV}C X^栻k:IWwȳxG>EAtz֧NϦhk w?Snᄐz:/4|:K1:jހQfʵp{[^wyww_zpV$؄G,oFaQF&ocmN6!Ң:)9x MޓmL~p`ap#868 Nϻ6>g6zCc%{ZO<>&~?o d'hfO/ΌU!B4JdAvMko?= *9p#sqD& =PV"$-pd/綌h{5WPoEt!p,+[F[ 'M)}%8x@1QT-8* f}ت6f!1v]@q`ڊĈ[Ih4l8@nD Y/GЗimyjo=t|wH ܐ-ٶuXWOt teGe{A.PZ5v#/C %]ݦ}2,yy_*ҳ"Z gE8!;Sc`e {ˉn=>ċûs>߼y??Gw=X+w}1Xeݒin xH+ _>k+x1[?,^kOoW|dy57b2hV󺨘crtAI&K"­5k&㤮BKQ뢳֤1ϙוhXZyB\k`iwdL>982ѥZq:i^K=lsH&R4^(k'Sץ6RK$3[Y1ّQz}fBZS>Z^7z1M`Jy@`oj}׌ancKאObzkZCMW']O~yg+*uqn=ԓo(MU}3ٜy%2 8Hdxgw0C˥ϖl'/W ˣIkɱƒMJa6ǂh t>B[#տ #s4 zC @?\ !^tzޕ_mm*{{7_n]\ݦe3w rkrpQ^LtV'|v,xr!vPALYh2XU"E.15Z_p4:"5L9Ʋ[1 MFT^|BG3 ]g"DOҡciekPA! P[8i/ $f0 r>|Kbtjd5Y͚|Ě'[^X;65!=δlHsHNɥGX.}J9{޵6r+"lHf,^MEMs`Iv|$gA-Yn%Hr: 2cŮXUl5ʼn.H:@&1~:.:V <)`&BcQfL0ElKʬ& 0I 3E$[4D[hWu%E$Jp2˨Ai)FJH;m› 9/:u\ fdObӖSbLTQR8MMDZ = Ѥ?M4@حocMP/>&Jui-8e)ԉ5R79-Md"/I[ 3L˜(D3mfN=ڧrՄ%Y.{X kBn/c NxFRHT'0QKv 0QOa0QS^N#au7xrHƽ-h?kwpF˼JqXɱ1,{$_ϺR}K>AF:EY1uItnggєfZ?\2d>ss&3~)O8b5ԝ9tȞ8dDtu氤=E8%xSI&z$09jeL0NMDǹmw";1!@'{bR~E_+3R![qqV=z4e*jW=񎌊.r_uo֯g~yuqɌ_G3f|]6U v634!cZ eLQ Y3p#2Z}s<רO%FW}6GM_nSqڈV0//❣'_zhb Uڋ9ĺ7CzRƹĽE%N8%2 /riY+'Mfchdn 5\*-NKu y"A+`WBdІ+" ~BdR R2P)&:*=$()D$'D^E T=@l0t%V/!R>ûQ!y,Y*khR2\YϚ v{?12JaYQ+f&*1/dU{[tW*u+{^W\FJۿۇױ.ac1rAqFjƥv+h^Uژ4]ݘaC*\?Y^$vM),7΋ыϟUDwt6gi }W~0 ^A;bfj ۮ9PIӰOjk@O#rp9nU n5E9Iw!/rLpL yhCzF-fĂKQz+8`b7dͅBh]y۹RgA- x/6*Te9 kQqb,R+eA[*ڳ!m7jEO4*@&CL2C#`-r+A{Af7y&vZ3Gg_1&H`ꋍkA Jwm_l݊cyPLrEsYFШ0XnDpX,Uy*#R̴ă9KI| \IՅ'Ђ0/#?/&QMe<F,A%{ |bpWdBi +'aPY0^[~e$ H8p_atN;-r1u}R͚%^<- A T6Ogw0k&h~ULυ弜ƺ-ͦ`T|hl:k1G[a!l.7"7d4Cf=d:l 1,,(1d?Z1+ߗ*m}qt1};2.8.M83|l-`YǴƪf1(fuUu"046?sqEXʖuؗJ͋~H Y;చ?OO1:y9-}l.>\v|§Sy5~b}̟. 
z 9Z+t_5Io>\{:$oe\_il; PUglo璐ܷ%n>4BT?m`"%{y3cȨ'>Z g¦YvD+1N2ÔW. Q :lwlf&]bI bttҧb@RbiW{ǫUq_v1˽<5SE_oY|tTH](&a[+5sqyBy6R\Եbh\D|@MkG5Aaќ!n󾨄÷R &lǜa02!v\i)axl J8ư@2z3WDVh4i٧p$w3bBrWqE4tVoH=$)E<|m#5m01w=tmŵ-:ZssbZQB2Rr,ĺQO}-:ZueFq'cf,cE'Y-%!=Zo;2aNR([#BoeSiz5QASu 2faϲxZ)J X,;mՅҭr&Tkژuwҷi.ktQiQT Cqs_`*fm\q}jf)C3pu) ߊ&N ^3"oQ=`Kx\@~N&_,f4-2y2v+Δ5Q:8TZ|1:.N-f;w2_mN%5~Xp[Ž bM6fnFO.cAꋓ.R667E j6 (>.6݅ƭ} IUC,&i&kY5Z(JϿU{UjCȅؖ~"`ѾX&>MZtk@o#T=En}@cQV(Ot(p;}COAi._5N` 4?IWot,n-^ FixHY $aike fRͽ}0;}k#FZv^E TV+VԤTv5Qyv,5&uKM)Pvп2َ|QoW/݈JbXNU &ݍ71at?{T=Ճ3f]^}H.Wﮘ$Ԓ#/UEjEh))r(~whi>oQ<&UUjP VzVF+e1%RjҳR:x½e+}M꫊ԊH^@QSаAl^vȹQYc~8.UUFNJXkY`j7 .HF4Q e(5@}10_{Z4ǟ򋑙ϧn єBKiBk)KтXAѐuBJ)}鰤'W%%@)06:%mJd#Xf!3)' /&60RaKEșjX k/yqd8$v,3z2E ;>DZS֠vqʌ7}zz RS E:4qmn<g6AX714>8$mn*5{ތ0JZ3KRCgi&aO݇9+Jw{0A_0 İvTtsg`.#ۂ}k GF-QK#9 QL2#Z /f{%vϸ3i&Y+c,G ~jMnXThI Bf|my:g\QТLm3O&4\"NRo, 5AVw VoOs[}THQKgϾ""tPTWJ \Oykv8nkT "8X_iDE). Cg;%Z)P X:ǝ@.$h, NJ#gZ'4b&1L0< S᱁,^p`JBZ0gUV-TύȽ&˼T Uy|RJR'ir$5](SWUzRHN*@t^#]o ] immCIsHk֝=IlKϮkFuz5 &Y>*PZYIz8TRVJe+SNN] bO+>xw7T)$(\Y]Q*JTL+}MꫪZ2XY[)O E4IX)OL^JSLbbxJZ͆&z& SV`s$sr&!PUV8@*N}[ΡU1QʛVp}&޽7 }Vڞ'$4mϷZ +=k+$J)"͹̋83!٘[ lU1%I9T8ȂsTFk.5UNkMchdn a6&8B:f"rx郹OX }QAxUd!`ub7>m'zS gk"iRZ*EҤ=^=?{׍J0/ pdXU$؇E v0ݷEeGeIxRvKnXK}] ڠǬWR.]ӯ]jGr\ΕăHԾ{Xz+ypO_3Kt}JY䍳U<5Zu/=qw~h=#c2j}wΓw'U@um[mOno_vbfG(Kz2X=Y~8Ed\n[Tf*H@rD۔3e .{e)@4ƷO=o`,Z#d}u$~)7JDf"~);riJdɕ?Jϗ~^h7MzN w-Gl L?8*qV4sTs=nRdz#[nmtk7jiv6ޏ°,?6$.>qus^}ؒ0y!"P.n')u窘<%e?-{yYWneUx0WC班{\o8bqr7ǧH6DKL߿(ȪԔ`e9b7$J\G;OTTG;>}KGIBg-@tywi NnuY*+kހ<\ޚKNLƼ^ӟg33N-_g2B;^g޳{VDn4w';:9Q1k?ԫr}:jiltW nY _~#F6p=S1NJ8n&X_!́jSMYؐ|?gcfDFXX&eed VS,k(08 ՇU|BQ(jp<620cɽqY4<2d71+>:dBz\aF4ûPnvz~}>Dc* t#XrR Xerv{~4ȒlY3wWqs@^np[г2-e}.i~/u&eRf]*\!_ͳ!Fr }+J6P!\Pks bhnRi  JU4gmia<ƒT:E=vJtTIG-|so<"[X@58_TSax=#!ar jĶ_X|E&Y'M#(48vOw>@d&/{::}lw>@0z R7յI |Ǐ7[*1/X;7_Q TI̼1%bY1>"Wޓwl:UqOo1FCbPrёdR T1)I0S9[S sC`l)W"r|ϲ0Qr2EG&v Q=d(h/eDZFIފ*J>ƧJ 2ZmdL‘Hs#Ͳ6x:F(}=ߜnRuL\F\Yj$U`dK Ң'k*Q[L=l,ɰl+M\|#9JQo#'3UjQmZc :I! 
RJ C }![m[rg97 WTK'v#1 RSa׭j[.巛ͨ4!8![3@f=l >?謦Y]EOfʥS82ȤF 1Vs,3¶i:FxZX:NT@/.bUruaZ*ފ~1GCI7Xg=7t5eqb9EciİuzF:y^`R@6 mXC 0/z{ EW!ߞ}!d >ȽI2Vz @`yM7 @P`EE)Y$6KYei-#l[$F @:iЫH/N/u݊׏UPdY O%$I op)e.StkQKӭ?CcT9c |zebtD;g7ݧ)}R{J߮H}Ed橈ԧ)ql\Ƭ@֜LVRRE$JjR?z9c -Z<2)Z$t! )A4.*Vڢd N1++\ +U";8V-L0HFX/OWQ- PRHK 2# ~ 2/(N>Vݹ M0)M%[/ntk^6(dcn-0Z+q_T{!.j|=xs2?[Ip~t Z–zwB߮ )[;O-"n_q?lj|lb5k)E ?53J"aۋeIiLf?~]80ӔI:A2I}$vIL–uwR?䡔នj(JH.sOZ"<'ZD7")%beqpm{C, &I0l&(zخɣ<A厗ˮYml9}˺]D&ONQk񎍶uVN3nkzq_g)+e?3!kg0GGKY'4-DNQ–Q IщA1wJ'lmv&{X^:#Ccof3f˘͝3L+US&E?~gnp?"KeBȫ%Ge8,A{s):0 0)`$,&{(K:Fx8%0<|h;YR2$aȂty eG"|eh.ԧ ^t;;J)I- Kz@d[Nd{ (mfhgkݥz(6(_o$zF@ J܇-@xe_R1PE32,Aw)P s1,G'Wՙ hc fTk7a#<jAC |1vH6%(vۄrU 88 /Zָƭ\ʸ>/5uʨR[[^%z)v&蛽=G,݅tS˫D/IE3o uVUܮp̝ Ȝ(`s*z\+vSP;rs}7O7̶@~|v@jՈKfŭ/+PϪZp@0EDlXCr,հKW 6JaKI9Jx7B~ !ZRaeIûYd 9jOKmٖ=VFYPua f8$8@K(+U635J>|>#,3wH)9G\w)LZ"uVnkIMJL CWU%2Y{|WY뉧jXq};WR *Қ%2lW 3mgFYZ_'+Yvewz1q ,VՇi3¥%;+ uV{9[%ln@gK$u5¶Cݳ]^^ 9*:MJ:~6'K75^V8RPkrx $6t-&t*{UπCg(9gwwڀ1(*h5# c: U Z{/4 6#I-]3eǾV'RBymQmWm0;(Pĭz㥎jG@mp[/{@Q|{FUip?cKH͓G{DZkKkhOk5ۍ 2g{nPf!u)e'] K6w-^ xOkvwzbԾ1vJbԈz?ԓw_\>ϱv̓[}5 egrB<~VgL7b7Ǵ1^Ƀ 1OFm5J1<l2Oqb%Vb\˵'jtpԂ%…1j -Ϲƾq^3ɯf+zg㥯Y {z\x2WY9fj}?u㧓oӉ,d0Y9M3 'BO82o:T>SC5:C%K&C cm3&_3N4Xy샵7cɑe,F1Jmi:6bw `H{~lfϕ*9~Rض]xv=4|?3ӟo=3m$#e VvLKPF [0fܶoB=)2'WB;^{d1j] ϤpA:$>=."/`bUlu);e!VLh@q%;y}$أR[WnUT1P[ .:Cl=_Xhb]42taUY iv>O^$v4uC?C\0-r;' { %SK Geiz XfHx2o!b90AA)6NVq4^#`6ֆltxaƖzF ٫xA܀Yklr[tvVpQ~3S;F8ۈ:ZwOTss~eׇG dq't!]TNYKMvrnu"ib ߞL=#\Lة< HUٞ.H 5 l{=ZCj" wŽ(@YJfkU uTdSmƓ\UNN7C۠ҋ SRP;=#_ ች&V.>Vb9'm$0h$gue6e6>pc}~$GNG]&aiv')gRP* oc#mx3ѩw}ysjz^v&^mmtp+:gt>|HX  k]rFX3:0Z~?gz-‰%^-pF府1t!.pj%6UtZH~5 xoyI:ͽv7_&,g{gC@2ћ|&ћ\A[*n;) 8cw5c;E=HNTdϜbbNYd&IQ Bv1bN'#3X=y'xv&yJbfTKAJ 3"dthjsYG:瘤zOJ_bB*$wL_e7ɑ}Dl5x5>"wf<&o IL[ y^ylЎ!i2~7{9YK%\INl[Q;k qjl2&9p j~JĻNrCH?S}}_!lOwԘW=}*6g&BrR {eXm3l~U么JLq뢨Kp*Y]k֠LgbTz EJ6V9#Yfds0Ʈdz#eV#|#/GEw YnCPAi&!s9eYDlOC)PTҪrq$Ƥ&~!5juJDW]]JThB(@%C&R-\k3o_= Vs4-u-zKj\c9hcRku:JI%^ GDNf*MKcW͆X-AۓRjU^ q{&G1*M7bw將Ee> "e: 
nEhNEAnEAt*НPz-Yn'>a#eF-Zo<ӪVzx6ij[Y6|ԨtP^ka~$ɾ#쫧׺C&#y6$ևyEw$ܽ2azj= ̒5_4.g#3Yͬ:}Ym^]C5-:nBAiגD}Ač)lŷ#y!zv󼭫9r(X7ʪ KdJid/];uVv-~b{`~-Bgk*؅^{[yz?qW6ݜ,n\ӝ\MP@yZ)rۊ&,]sۂ&n˸S#uevꍤHW 2ŚZ-&JVTo+.ou(X:yY0R˂)}?5cϲ1cd1bJfHʢCPSP g8T &SC3zJTld~+nfDc9_-n2u{=\Zu֤[,nzY!u4W{DHP_qG [7exZvٗ`s:m[88vݮs9.zmk8EC>+]LUKqC:k2(W[XJju7c]baaϭa؃īK 7pfc0MX2QvT"!۽y/W/k.4-E,H*=]o+ j3Bs\ hN _g1Zx%?8~b?~w%|{E(#:xeCqW-ުnZ-Uҵ5 q_2}ٵ5)9ii{2'k}6{6ft*mKGC6low}r}nm~L߽!D'Ϸ_-km$E"AJbAll6ۘښ%,3 /Y%[%YRUdViT<<yxH6yZ-M^Rrno[v~ynyj2!]n/7=ћWLI0 b}ir7S,F57w)"T-&]^JY=|m/yNh t6yUFrQ.% |}Oʇ(p[b\po0ΙA=LoJ+s^,2F lΣ%N*@Cl:MO(ƮotH::sp$oXjm $}t;+ąZ/Z+vRN+ P^9p/Z=q37RT9"@2fmNGl[C2ծ6jY.r|i:ݏ:_{yPO{>_O:!Dum~no/v=AA64ˋz>.rU5 |>MH֤mZ@]nUzݫC=zd|ojՀh\m3$Bѯ|od\`fmźhL9[|/֭rC՘jq[7`&hfmźИR̺/biݺ!W>DƔ%-1s$%,@A$3`eڱImt98{~wuOe0tk|*R.r%ZGQOo"7VFQbtE(oc~&m6EyĄB]]1:8iIA.FZDySXU]dpaf@޵o^ $B]v_g V<}-ϠPFlIi/ܩsr$hV,{Nҩ̭{tfҴuH=w-d3^@{{@H$6 .ܺEtfkdz>ݚK<2tOp`)%p`Q%y,E01, LY ] #mܥڍƽMsdžUPEUR,ܩz-qoXhNL XfzO%9"Fqf cZ+9z&-:}l Lj v>s,<aǨA-UF3#4hC":̴ Q-Eߍ6IU!DcHČ 3$0PE R8 q^fT\7ARr!X~80E PIbyb0,%)-8(sXF""!hQe>pØz  W lF}Qpc'ާqa[Q%'oއؤ JRiK0g9clN,l_3~2[ܷi|oAOK{R>F3f<-L"fz'_܍2ѓ^uz^|ZW7R+ AQA8.j.7- -`/ )We޵d=l1c!+{6ۿ;%Z]RO6kڠ0kwT(@wSbs$l#w"eQNw2I]q[H{f6ȋ0̾DBe9mRS(Q '$-d" &,U.$;~Ӧ1d!*y%Vhڅu^Kba6 VD0a kX#OE@aAta pI1\[ \"y>:& "uEmLރ/%!yQm~&b5AwZ47FŘkv#d^<Ԩ6t&kIkwΡ_@$i0H IcOYy GC*>2P@,R?Ks6qVےEj1:" FqtpuK'p 'OpX&8ΩA07r P%pj|#wƛM/Һwm:Gܢȷز_xlX0vD?$W˕ODNþNi#{/%G ko&9yQ#MkH}ztF^+S}j9%;1|>ε"q8%J\)l^ڑіt(P$ ."\`q#8fz1~݃S\#ʘLr 61ͼ%:iY*p"K p#YK)SJ,%)J#APp$ G&aGz*-l+KʼCpJoCoh0 BߠP L7>K'A]9'c8Sw5$00%CE8 N<\ E7,=li(9O@l;C[, ,ÝkD>ѕ7y.}PLеCB(wI!8'HH*K,{EƀfّhR1!]K{qf\۔BRU9)HNCR1(aS )r&鼓bUԚK`\JDHeT0 ,hn_S`:l l%c8 ж~N᛺H ̐m b̀ z:)eP`Ts3l*|@B -nTR6H%0}BVCLf+Pt4ГKy&SqRq>K~)3f~?hݓEj8>LUd&͗wL ǨJE~/tTzXoVڣxXwPR>tznTLh/FOTuIg5zE\Gxd VS.I6/~odlV^fT3CvL?Sls禘&4!D9KJH?捰?T#A&4M,*lc,YEVn *,zuVX@BPڵ]&ԋP4=/=?vR[pußĨr@u;׎R_w%,;ދ" ?zRx QbA1|EsA<.jCg߈'?RuE#RQF[u|'³࿏v=o.?Ͽkmz\?-T^X!FʤߍGso<2]Fu+Kkd ,42r_ IiNa- IRpOJ`ޅm?VŲ{Gvp}Xu·ׇGN@$,w099Cߏbv-1W*1?xx|ZKELpE{ZS 
XT[\_-~*8P6O⏂}+1>!KphbN/K;+e.5Ȇxɣ?,FrxTgj'1z;=%Lh. +&t_cȏb}"8rۦ; [ǀ &e없H;,T]2PiCJF}g<՗t?8}PWREц{Ub1Pثnh8Cz}jM^kzloc '$[)Vz"g[ ^ua2Hԫ"wS;xUS֔nW `Z)3Lbm*WFIRka4r5*xUMfKM{uZuNþoDHk6?>md &;-BKLs8_>[@3~Tg֦J$zVF)yF`IF.eJ}}TdP nz}??)=Ğ&fFw&]WwOMd1%[W(M'Gۜle/ >fR~2Ɠn6i k_p bT.珣yfdۃƆ UV6RSygfT-StmR!Okz\NYLәvfOlKr_S]~lnȎ*qFvSt&WMμ0\R.1? ƸGڸ(iXC:X[?.mFLֺZw Gŋ1 Ҝ֥ Ky78+>qg9 i ACa:CSz.8L=P8ThqO% 8j~8D\ s;ԳiVX1EX搓\ɽf3^j[i @1 DYQ@`*Vr ( b>\(e@FSDQKuRSWɳ2WH + βT"YJ4EH $A JB`8zhu,%Bu}$Su,`CD0$/Xbp&]}0csN!+`Y# !rq}RhE CaRK%LRm;UK PҵEj`yK',d#ܰQvVstff$ެgu/~]^6)ͰQP9=U6}^Kyc=ՊЃ_/Km$s֫4P,yFq%)!-K9% /1i0Ds^fi:n~߈?l8?OSkX;m0Uܚ*`TXA iAt3]+.+`FG/Uد*,k=IJ`oSQk+p ԭ|xV<johT@4f$RLfZWRYh)ҿPToCo)QA􎔡E[*oOUrjmikٵ;,ٻ6n$WXG/7Rru%/RaDX$ewFP f#NUl]n5 6t#̦w¨/N:e@HdZqXZ/ ~Pʁq|pǘ)Ur=m}XƴEe.m.ǠW^"]htG۟ E( jreh2bNZ-]TX#>]ۘ@'#H؟PmBtkiG''+O{BxVc3 _o1 p@ƭ[{Fx QLSS]_yD"E͋hn^`qF$LH-e &PQZCȘS[G$ .FѼ/(x|=:_, }HmhUon#W/Lmr[vb1Jg˷21X@r)hp,<7:"C 8!ԃ@ph3;5HF eyJ RUN(e_<&܏ ٘E@N`]n~ҫzb1NT 3ٙ5> O @孿bPN\wոG1?=j]7ŎXeُ[(R"4 "2KaXQDpRXt E 6,\F SŶ1F(VSxl"9Ğ L I0B;+G#.YC(ЭzRs'0K!c"oEPAQ_E}QVP$ed9"9XwZc8fÖ!x#EU E\u}5Ǡp1Kk,yV-L%kAdcbAL#x.3[E뜑R25LPIg 99XRu0arPCuw%E߽#y l#w??FcE)F!>9v™<9X^ P1T;ުJI呒'/Rwߝw]Su)XgއI+ q\-uu^ {ʮ$L]![G'21@QaG6_jDU?̮tT/K1)ЂΠ=i!Ew<^Rv0A"o>@xӋw_fՑݖTG⇛[|?wfMҽ2 Rq{?fMOF?^|֟oY_'Sؒ :g \ӽXݍCUFrIa̿c/v{ UlV&ZB=k[" }>Oev׳_aXdpz<` B6bgF=ڡ}D=yb$k;m!+1U3潖ڬQ/{sW"|_?|Se3HмׂD?wz-J C$'>.?WG~|--os F7ތzmXQ4gs}si]ػ7m~M \a ǻym{:^e7hi[Bw=aw.|/[T}@尖.[[\[.I2Uu+vuD햊A褶QG 2ԛvK/{ڐ.I2avCnĈNju1wMXnmH -<Ċ:Cwt,Vw Z2Q$I:9{uA4^{gp$˅nspV:^nS3B}=ItVgֆ@S/ AOTjHdyO5c{j!k\'LHrL7jRbԬ; PCyHH׼#,Bη{- "%7yju:,NK]u=imH b䈡ѻM Qb#:mnGBBFj6$䅋hL[;MvK FtRۨݎPぃޑڭ y"%S8RB|͏Ƽ2/T& dDMhW9e| ~%g1~qs %'8ⴣ+(djy+^k 33@4A$){BV+m~uy6vv~5n~y?;vt_抓-GqoNEuQL#3EFkk+s6nbQ2{|'˨qAp2?qq|錷 /qsw-qӏctsWz ze#CpkNlSmfثrG_/tUyQt]7iL8wL=x,!ilZtJ_iss&kt|Vw  BD[Gt?6G?c2Lr:${V%yogu*H G=;sXgfW1"Ӌn[%J;Et9!9SlkczcjǡxG<#?>6hXQirR$Ksx.0Zc!G <(cȎJTFY,=˭O} Eyͻw$/}?og+=Ĩ߶ h33q]0K4yҗ+-#5ÑR|0E9?]ݗYg-?w^Uqln`CT_ [&DU)ٌPA:>%pXe%ʷQ|H*%.lIR)y,/D cΓ&T:X3DC=Δ6`W;3Eo_}2=-׋ef( 
f)=K`ͱg(HOP5=jMzKf:ac|`!Rvlwu~soA3/YrlvYvlkOYZ%Uk[o$u ٤o k缼r5v) :V”+x'#X1ECgz悇wV2`D:Xϖbٿm{Y}7CI5$r̚-׷w1oVcitspb1;!3;fjoوl,1-̸Vx )%tHAn7늤$H~iy*hEsΡGηS ,OWb{O/dE `ZmeCgEu QyhcY&էY|dj~z?m;3Ռ=픇@@#O0 <ʋ2%'ü IvI@[dRΰj&u`+A:$6'x*D&dI(z`Peِlh #0w5aϳxyKS2*3BCk.,\ȥaI-T 149Xi-mDr̐K$D8' LcV4:ɌA Mm ֕/B"|31 ?dEj鑽Y/,_UnkBqgB]⫟W7t5 Ljˊo}(. M?WWo E3GWoF0e~(R/Wn>>@ :>SRfUa\U R&%%}1 r6W/ݠ[R($!~"zuNƟLnk:} *ݚ?CTv[#."+&X L{Ldmp᮷TcXsƒ:5`)$\FANN7ݥ,|Zqw6FcܰQ㐤9a14aJƉFHf92Nd22Im0\kEdԄ(l2(4t4N;,)HU2g D< 1sҀ :PQde^u aad0R=:1ef&TYEkq+()A6A$Q5"wұꥄ#@-6Dd@La_=PL[?EoΒwuN8M1=}rrkCLR"ew_+Fb%)j=$T3Snο;`+Qspnn/.}3-z/(n䛙6^oKմ n-_J^zCZ"Ȓ'ok9}r=@ZnΫ!<}[u!Ն0% NLvtG%@7iR#" +crB1) omX^rNRs鹩͜M^R2KI<$nj{f!6.=4띚9H`n[Wˑ0U3KEq{}Uԫ>gm\j5pqy%9WgN\^~5k}% ,/@ɕtJ!@]AR,]@ =n7<*A *;PA_ںr͵5@L T}Tc<%sm6{{8\ OPvL3|Nwdx?z md 6:*tӠⰐ;ƄbOszh%_?=RGl !u1y}v/6L5GgR1EW,4XΞcҸTt,E tb{5kNa"I81Z8 P|/M!Ry:P<;X c)kEau'hRClV}MÝ5'`ہT\G&VaI[ 3%/a20F!68mH'L@Z.>~>!L4̝4x6(X 'Ş6&i:8rIl7ApG 0_f mqlj3Y7hWP~Ho8j?5[knFyB=APn& 9!,f8 EE@"TGah bGi9z~MǴVOPE7)'iimLN9Z",]HcOQvpnl7-^]$qB&E,",!:ȶԗ12D0mL`CIRRUrqc1!˲4je/~I,8"-ⷄe[( y+F!̢ؕ*f''f%JJeSI[PNDHq/ȜEĄ#nr/l)TjzziK>̄z}0)Qϰ[ʯ=ja;]m7ffG* Ubv'OK=z|Z/O>n^\_c3$d7R(y}ų`>ZGi`L&3=lty}WG"פm,txIzYE%}pKUv \-11db 8λ-}FOz47Hlj},#,ł0JJ(O04|~ _*<|Aå9ߢBhZCXU7C3WkWd17ԣܩ(^T)9Eg1)* , Y"P!1?0yEBeWpDAr{#w3wzr7|yl>; b̈=cw#t A?o4w Ypefhl01e/Wuke_\?_S4/՝<5IY1IF^ yV ZoO~ xKǹ''˵EPm[}9BUpI=~.^\/ ヘS/5iLӫhkA#lHnt>GwUh9mbOs[Ǵ\cpn()y&1JCuh |a}t/x{agnw+AQ WSs8ڽmd 0T񼪖p#jçNCi^CPcSW5}:郠 DcQf;Vէ iO5˖v׬qG V/<מڣ18I¨9Cl8?éNcǥ$(p1yTDA>TR=w8 E&લSL;%(/%{۶tZKslO* s"=^f}͂/Zb=A6^Mh4 %yE9aʜ koST\a[dd-A$GJ%1UXhD$T+LJzs1]_JB!iz7MIrڙ̮f #`;2o4pY̝+yӈ1.\iɑk TyA[$}hpB) IOt< ްV,C8,BzpX[0迕Up7\f9 ȩzߚ[! :m9#>Ypi+?.u? -CZér &#/ _⬦#',.-ȴ nF4)6͂M9Շ#ao19QJDzc ؔ(4XUThs *z>. 
@IYxת@y!bqp 3H%A$"iu0N# $UJĄ5}2XabN0eq(GG8DDR'd eVC<7ŭ{~]/3ok?+|6.›GSas{}|y;<`䤢*%MeWx̅H'_>~aA7(Bǿ7s\74XvOL2 ,ۻGd B_jMP`<65jQ,;YM RUTT Zܬf4bMඦӔdÚ~k:07+c JX 7+eœ?W +eJS /d+e7)c`<}q l<%:5!udWx|L Z11n0ϰK*KUݑ?0^P> 'ru}j4 ?p1gLŁW5sxm$m?'Q18COG(QKb ɸ8 .o);s 9FTwMV}}/.Nx7ɮMfa+|q(EJ0j #jsAlhX St0y68,L`5x7\R^1 #bm,h %C\(@qcZ LPK8~H)D!x,TUU )i$.N =ɤ]WſRlns}B6$3PǙCHF KPҜe[8$@QJQd ;Kyþ:NqolK|#nyGR쏘\': UJՆC?FB)j)*9G|;3gWd3WB#Ġ- a:潫Zo`+aF&H4`TrV^! nmMgF?VR^ Fڣ38qx0a@~WN>B MW 80O?A0A V+t9,![/~cPOrqlmns(KHE 1 cK+ʓ(lƹ~Uzkk'\o}u0?E]Xi#Z,(6l9@= )f,t, !8Ǝ=BRO R "6d_t;q\Vr̠]Z1%2`mTY)MkqWR*ܬ4:-9d՘Xϲ?;yz>RORcF`ߊ*&>BW( QlRbE6gU(22 h Jh51@h"ێZƘ3-I)qq8&Bi\$:2#R󉙫gzHO, I`N1: 5OA0o ii(U#`&5VvzZZ"cn}dx֑\K+ 5 3hkY}qB!p41 6#\D( hxHqDe"Bh/!a72<8HHS}c)@nUHP3mZ 茒2bnJC'Dk$C[rk\;hSG3ywd=}T7GسWN/uOy>1EQzYzS ^MQ}ԽMuQm4OP-9?t&-wP>!AgWDx9\]:b4RS Q!ikZsgǧhngAF*bd}U |+Zm궽PRAҳraFWb@BC[vC7I:!'Ptuh :St){3eqJ~,>c}Ze[G"nYVvR\Y?\6Σ^$/+j-U?,xC NXxUIzG"«by\[%YK?2 iI*.~SRVU"kڣ58,. 1͝)fT-zmNy!qѧEy(Zъ`~ tr(s8%'+zNCǘ&iHpfiV:PZ'SN#pȉ8\*q6 /5H1)AK gI9M҉j B׻XmF$sgV#Ro)'V + ϲ; c 9]Kf\[uUHS$BJҘh2B2D(o o>=Jvf=F~`?^~@*RudZ<-@,((_ߍrmN'4B}=zz+\)P͔ߍ*axT+p%#OJ)DC|/Levx<9  f<9Pwoa?l=Cc"m>vowHܘ1)Eoӆfw-ac&D[v/hV` d8pEن[akANV0cb~!uMꜽ ,8́A@PD$I$[UF= W{mC -9q)sLΛo"flI.O}VDy-ecT,~x}.Cwޕ "fl9zu&;)4=nك|n NbkR8e?aem-bcR>Q3"APCJ4&1I Jd Kј K}#tBcsX!(J WE 6}' 0LdL'ѱU!@k]2u#l!S86겡 !.(#!O%Rh"]EBa`8 yS)Fb #3@Hooc+A@ѹQK !L$%R$q!HR&!T ƱJ)HpBCƉZEz)T0 :0 8o(dYCAJE_5wn~VvSQozsp6yhL~l=p=~o} B"$ak{C.#ʫAL;Yx]?~lT-\|YuȶlsQsm=W~fECT_'+J垡/'}X|hub:#kIu/w` hz>V;_]u^h)5RCMT>6~bNmnZxY}X O5z]]Ưl ⵛʫЇB^%ɺkyֵi8Z^QB3mrs?w9vwu31!9o8Ruvذp2RBЩ*m29^nnClƬ!f\6\Je׎^|u8.K>1em *wDG1^/94JCmD-wEl/ѢMO쇼bV>Iaq5m~wPp&N3!ߙZ?&*f0:X{[ Em˥ژN "wϗ<b<7@dv*+QYIHEL NFaR*pEVѩ*ִ֗[T4KBBz.dTWv+\vXl+"-nmm7sݓ)j)St LMeRWcFOUhg͗k:|pd&Sapw.陸~{RroxOKB&MKGzdZh.g/䋣ɳ2EKyK5:{wF~ew1H4L(OB;SqDrʨ\~yDC f[?ޢ|y/M[^8w<)j#N$@-L c 4eDH&kțViVh9]q!-cNxɪtYR50G7Pn7Pⶬ|nNϙ?鐃 %57ejrիa㫱p>@ W# 5jS/1j#L4w3ï2>o-O =3KѾDӛ^&ö)4hWcT"XӌFGY=G{r68.p ZߺƐD~'Zny#x:w>~sM6" wlt);QdK*q #N6-.<)>5pVUV%emw+5/Fqb}V]5#[Ih֣j`xj 
//(|Rm6h23/mj?,QޑօLNc2-CeaE9rH(J@k$48򌗴lrsup+r*!P`сVMr tvtOgcHIV8utԗJ;oS1l6({=P@Bwmj^--'+I~4šAcxCIs!aI2,0)m8T4R8>V?*fJkk7a s : L H81Q ,TT*HH(1Ga 1 Cϓ%ҥ9m>nX?MTه>7nw2}1g;qRLq!(ȤHHD(OTcb)N R~TVAu\%t,TނʍcfY0 (d$jK0Lbe76(RH$:b`kMzc5&pD<ܱZS%shŎ/;T0ͱ&09 4oF;+X]Шe?oƓcك#:VFA9H wDw;cc !~TOfzK"z@ia=^[c 7sqXLCiX8ꃻI.#s۝b)B NI =`:b<"UpR0TI@XB*@lcMN ЇU ^{~^{G:UމH+#"**^M_#63VrVѬ+xS"W<#PRѥơNԳc;5Ǥ]={~R bخ.x'$ @_#[},ݲ׫lr=4zWX*,4gpmb/3:'PsZ^Kiʵ3rG].1ddL%`j0 d1v|! ;\2Jt@prp%MTsrT^ՒOe[I(&%p"DIɀ0Ll H1Qqre~OmcK"3'R$'2 aH( X 9I$2 6-n3% .tAiR|$BdQqiLJ!IBn%g Wҩ+O9R:A`>l88ц>iS7oE)Aw%hx4(8@يlHT!u5q&w[+|;DZFġ-Yo6y·:q{zf{RCհFָ3<*M7Rv3f*gp3m'mZ5:As~oBc_¹nYa]of.[V3:'G]jo׬_19 M$EiyUi2>^ǖ_=_4 lgh\9օP="] 9!Ov3S/?Λ0U +e](h8]i溑KLZKr^5D$ z{_ ,dvІb չC7lw>)4Z;h x)ܐz↌v$"v;| g62} m* R3Isջq"p@E!~2jFֺ 0Ԙ תy#Ęsʱ|ǹ "ka {s 5nI8!6T|ۭH1$G1қך4 z,[UPpM\oY1klE$&솫GF![Y/qj1-+1u/ !vkj LSku*vpsoM]NM1 yh3I]4e҇+[Nc {CoZ9hDPP( V M? sQC;7X ?v=. D6o|mzP X}(s}) ah!_o)`g ;vK-hQcai1ۤ]e)KW¯[gVϝ;>Hi >$YI/%8 \s\_Z$kQz ~؏y06TsUg)⌵4[7M9d>2- ^ 'Srq}>>}_?ލV2Vw}'za?ٟ>5L1!L l^ѷ0GYͲ??]oGWr$V,,%w68% w)JGc4C e%q,rWU]՟o)*J%p:M mB̶Ӈ 1_B̗e*|Y-ļlqEH hӒHj% +Z7i<JOџdSK̀9)vtZ||Z0 0(}@HkVY̮B߇bw,K#Uc\W@i48rg"y˴H^EHVguqnFs`-p3ȓA&)H E\AdȞ]M;j4v*F"V߿ӥd,ekz?Nw$v;>,B89#\K\YArPXR+d6yRN8¤θLޟM`c e/ 6F)'lxD9"ca D[ۻ˹#SaP`FZ%![NYK0ʁ֊L4""@ΌiXSt 0 `E< L+z7% ||hv0f0u2 픛ɴWd(iL Ň!tJ9^KԈNp@Jhڪ(aIMX; 8o.G6qp"u9trpqP <<~<^H5A3"5ER ODZP,A)^2I3 ɑUz~D8hmp2~N_ "V{A>sx,&9}0,OrILΆW׷0P?h,Ex7 _FK]}6*Y29mft!F-uǖd,m a!;v933O)# Ysi/EjpU+?vGuꪎ%o&*3_| _SszO+$tI/hR܇~R:r$TʦjzE Ƙ2T=-6+߮A ZoL FJtcƼi,etAS|zu@ a&Zh-t3G^))EQ\lr ad b,xep sIڤcc:nN9# n9ۜrK 0R?礴(|z0nR朰@- OM#<7$1R)\o/ųȣVR:Jt76 &&q̜rN9'V#gzҼ \t PxZZMa %zdzJaK">0(mů.&Gp~bOI<|+r?lz^߆rzNt3p׻<-z&Ic3j18Ψ LZ/b㴔nĜg0Æ/ p^ @a<(ro(z0U%8Px^K HdyKd3Wy3(~c~j׃hi8^|eĽE ? 
/f4'pmXMՆʹOwd<{T{:+k+/W뜋t?#,[t\F>,Hȸ_wހ;k׆tbйR$4P~c=Xs:덤֩D/ö?a(R:a)\n[%NVDݨnߙj85mIrxz &vZjA6* ਦqrs],w|w0&nQ3-`:w3>`Mn#vղikc ~흮~<_)NP~дi@qG`MgGU>Nҧ˵< +UY?w_Āu\:[ y" +A$䕋h#yǺRk7Q/ݚb#:Mh})unsH3[ELQrvÈcnM1}n"uݚnmH+L2b-r>-HfLmq3*6^)vT35*S|;LNFC{$m)dm̟,} ^6_>$s,L´#Pvrmpb7?=槊joHuѓw%o!% .2w`FڻnOON0S%:Yw&C3*fx]J_@$4R~7oo& ٳ;6ln)Yӄz;FNp}5dJn ^j:q#x7 0Sɬ篵G>LigDzיkԁ-cgb|[EFUV>OԡbVكnU׀at>Ap v.|lWKH N~Ъ?_yf:~_:KH2C$j**0}4Tk͖&L{1^a 3\ lWRxTuXor"zOXVU^@o08b L9۷>2%Fl?< ʁ-Oپ<eG}Ak6}Lϖ^eS??Š-}|fi :oUe:7jqF!ɏ׺7DV5j(U0f)V,b&A8;bDbBU Zi` R\(#Y:1ue2OoW>e=S}5;e9o*\v:+çg#br`1C*/TKVk1ZU 'ι6㨒VNU[rGy68"X4t2w}?-"Ev4k 4Bv+})!^j٪mkV\yc(lGvm iJd~1%9B{EjAZZ輡(c;ՊJS o[0 {#mlpHgRV:ߨ#KDрc_ EX{Ng /µc30T}$UШY:pkz&zh)!PK84A KX+@́0c4D3Lz% &kʕ쐇+rI`"+ ;Vai53U! <# Āk `8T9m< Ms$ MEL:\״`$A5 GtQGJ<3xڭ y"DGoqk7!vkA4v;!4W!!\DWddP_.[aFRO)JU7FK`:0[0a-*T4E:WiT1+zdƌ6a7xSeCm7zh[B,B6@;_"  &%D (jZ=<Ս0?Xlam,s920v/SeqJ-ht4qJX>}w(T/=^%clQ7$_ )v}{#"o~2Ζ y?/ lxh[Xkbi{Lg/сL( 'aTP7KVئ#&D0=͈T]~Kx75ʱhTRFHTc(h'Vg=UZuC "+8@ʽ=XC2mBZΏ^m^.>`9|xQ~٠?{Wܸ C/3>o[3^;{`M 1Y % vGhH|_VUYZFUv\`gªs =ׅ̄l覅.BΩR ٥R}|9vō3bKqgܚx3mnbTTTQV*ͱXSJ RrUP) E\a CW)6 B:46<v[( 9'Ʋ,UAI( pGP0K-b ^.҄TK`'*],5GJRkH scGv~I- ]ؑvgّמM#Jۑ6xȫBC*S~uü01Ѣu GuBQƺ{9N k̺o>nUh\Eto,iV,Bgk/NjV,oXW=:X3Au>Us#us1ccE䪒I>Gze_$Z4O )jvVZ3p)482#SAeDS&iS_|dXpi%ad0]76K2 y R1< IEm*p:4R 2ZNRc{ bA2+2P,O&!R$`J|Py'8/tfң;8S^똦I,80T̝nȈ L'qڞfDJjK9 y>y?= .%JWs/mt ];! 
`xFQbFV twM;f5]Ff+.޸k*7=0:l׃h4[]͍; .wPGnP%vڇ(M0'JP.sȅHhiߧK~=ۂ%,ueL4@Q܂aMt9>O+y0[ښSϓ%Hb7DfXhF&%fnbk&^i &q*/wnq S>|us,v>WOl),4ɍ%r˟/"DvQ ΒOvdF) qݬxg~Ʀ ^\ m^U/( V+8rQů?|M[˟ " |"JO!噃 n>f^ZufH"LdTJ-hS|B12p$$-^,iSp4٫zċÒL{J#9w@46Sne{;9C 8꤯\in*`^3Ѡ/:p,NO ڵWBT9) u,$-D469:[k %m&gC4&c/ɼ3zNs_M#Fs 4Vi|_A|2VgCHW_JRCv1'\r"ζ ̆USa jhѸ p&=>&ו4 +a([ Yba,JJTyCvq~0WWaWCX+jj+!w"]Cյ@=@]I*F}=ZUL0r=&wXn=9YCzu+bYC=3g:OJEgOB]v4P` 'fq= uiڤV NBpƆ ,1u*hfb4u:4竆WT[{.B *v$VΓƸȗy+`y7ҽz>- /I!~{hҥ>g_CjU/I;lNH})N׎v/t"sqS~/?FzOwF޽ynGb2:N3KLafZ"TT$xK2 a0EGpo!~3[,@S|tk^# 7pӇ͒Oıxb!YK5rR˜Pi8c)qY%'F9ĠZ|s׍AJ^8%I&In/^{40&Ϲ:/4v[Esz7K- @(#=trF4瑗rdb&.˸$HR(.DaY:AAfI\lPbނ=c-(>6#jP1uZ@4q!RED%Jf&(f0 NRxxŚ>|LHjf#Fr TqB$f)88 A( R,K'xb0ƅ}S5@Oy3Mz7l98}|[r\pG}x2ɻ d>+8w!N?!)QgMYuumq;ȶaQ gӣ?c8SvI n)҄sy=S{\y =Pju} D\mg ɭ(y6$9XV"vhSn5ƏT$?5@n R4,!`ȕ}eV2;PN8yaJ=s4Y j|P(Y"e4&ed޳7u'O~3!e(l`aPErG6w>N m`6٪HT9x{m|->Sh{(jb33j03$ eL,2e!Ll֘'l  ~3S g)E&%QD9e13HH@,8i4Va-J(fYwCU~X$H""?Lr9E~E~o> ʅHq6sWFCc l~%gaҾŁ:X)ܝ!@{覘2$-$7ī^]\1UݜlyeW_NVv\@I`ԼfqYFOs$[E{W>O݌vUry23v4[ !}s}ozS(hGW {TU.K,Y6d0 ̶Y۫gCdyp#)YDh#ID!ʈ.t:x筰?z Llj].'qTA+|).L _SOC7GR/%AV0>(| O_8=b.% Y bc:DxUH8YɪXO J)YLJy¥ &t&<(aQYBL2X*EMbb(B"bI~D_ݾkn7:l6cq41@}LnFYMN/L9{ױI3_!ٳ#xd7'ik؃`ؼ B2AnRb%")bp a2Zn;H_"*OX_||לw |p!>agY6hwN^=N#N}VYGvݐ<ǞKĔB4+9JGph]lLy!yy^ @Bgel:#g"~T0bPJ8J |56dS*(18 eeTʼn0LQc/TJ`ۗZPk͐SZK -=%ݞԊ -rꧥpo -=%ݾw䠥=Rj]7&":{#50hi ?-RT(plSKOI}/5&BZk-OR*:ii)NK&RF4F|^k)W~ZU+?-uR!ﳖuNK)-=-ݞ [3ZG-%޵q;e+2|4q%60\noeI#k?JlZ+hW9 RD \h^C$U0.n7CbƥגU}" #tޜFp]>[<ͥ t|Wί 'wlזq҃RHøRvVnV?&BƥVR$øIH#^""4s$p/0.ğfKJ,r!s)̠y.Mܻg+vI\$Kag+^\z\=qX.e0Ou>Fas)!a\J7^Z/8RƥԅbBOi9e<.l|>Qˁ6,v cqXKМ! >U'7qx{̭L9c,%TT[ FkIWjͽCDK!H h65m<>ӊ;1w̨T)rWI"PBqey)Y\F6FȺ$D9J{)iI@|c`y\L?t%3%~l%=tEx#l4t7a51K)%&(^[ o-CN@BjVX2l%aBdl]PMI *!j"X0x[S4rvH4KN \@$r@fqe 9pH$̴DWRRP ntyߏ{SXDŽu6~R{=[޸˓Eeyͻw+䶛'6)\-2;;A%2dL[)ܸ/gyN(PVځgTǒ[|݄Pd>lR0^ eJB.t/r̸̴~6h4GW/Hǁ)(w,ӌ?0cRgmg2zh3_^K~4z;$\~&.0ʣӣ?'p h3Ȥr׃ i/gY!k4Wj3@\I*0ϥxJal8#`*?I^sոܞu:hbz7gkͧhN3NVzhrߎgI<|wk'?õ3ѷsiXL8QfhIRӒ?ڦh}iƾinW>|Gz/|aܝ#]?-i? 
׎/v%&_\?A~S=@2ҕo{8"Տhl.ߗYkHn9 _o2?Fnq+:S> ାv|>L9c۞6o. ^u}:[@+Qufi${aYH'jl Kh8$5Fg4ljrS<镶uZŭZhV-gd{oe+t,oʖvf]&G!& Erf !6 FylB{}ku>J8@Q QbS8FllTGC  rBacRI-$͵8H0!a#5P^ 5E),TA% 6KB^SVl# ",,,,,`bQ5^]ze^*[ZTíKwXP\hYBaD f9ֆ[ՙj V&ZUۿLbh@/E-(0O&h\xz6!Hr-(풆# E0$h1;ģ)nSֆ HĢKW4eESV4eESV4eESV4eESV4e=)k],hڇq,e7TW', I¼GKS" Irg!hDHb s1`5er!vs]=Ns"IhpIKQL7Dbd[ LJ)\;epb乳lW-ëYVD~ A9:91r@$ȹX3ŧ .Gj(cF1x/ׯ(1 +TL"%d %VBiEyx컉Y ڭACL=bbʣ>—jJO9B\i D,JO6B3Sz('k.mjbm0! pIR"<8, T=Qȅy Elqw{(njK"*`` lA:S5L~L~ۗmێkX:_q$~x7֨ff1뚵_q/g_8ξr=S%"y^daX8W"S&A yn"PjgoΌ[Y<Ս<ɘn Rysi:TvnL M$1! 7@i ɝ}zvYpQ*˭cjv26ò%P3fJUoXl%zWo%ﺓO^ݚ'e-/;mL]#WE{Gn܅*u#j;kD6q[n.ʒ@H 0GmteMyڠKzz݄ mw$h q$[g[ϝR)MΈ@>U vFd}vn߸gK%qNsk0VL$[nSÇ'hͤ6I tAHl4k_98ڂA|c`,љRϻF9{di$ %K@*TDEfQ3hli+\On';6}rRɴU6dg-J +(ȨXrN%( A26D@BZf@ɕ^E X{ͺ7ؗk!+ADT$"xrΝRl( \TG64g5 k|u&<\O1p!=`R C aظ;":M‹I.ݚKspK |nv4|)Ml 0+U\0%ಀŏA!u'`<|1)sѩwFtR.>o,b.+{DjsREu*ݞ1/a&Ko>;non%'W4暏R3,zݾr޻ 7[ 1U0Tq] )`͗ Tv-뾡> (bǗhzThK(hΊV҆myM+GGa&\:\i-qK5Tʅ10F29$BRND!zHP݀Ԑ^w֤lG r)DrFZaQFEfpI@hRS` 4&lْ@ҊʛV}iUKf4NѸV [ O^nԼGUy+(e $~=q@*ŌOgCt6JCO=Q:0JlZ}N;/"{Ӄ=fs(˱ݫ ^rLqoߴ yq ѳ7\!d/n$+K =#ͳ2xGA#H8Yj΢) ͛!Qrccx(FtuR\ZN/.e0n(NvS%I*@xrfB}^j- >Aw;LPp %P`LZ*bOQm iM"6C¦s%F(l((A>Л C 9 K`A9 f@_8/*Մ @LהL"E5V F9\4GRs[+E/r6lV m nUP>&j ]x} \q,ҹ«D3 <$4 ;]3x3Ę\߈P쐩ĹAԅ$r+(+e6#A1b`XHJZRb 䓘HRXn$J@P$ IiPNpuv HY F*6=ŭ~.nD}r2 L蒻M@]rQ\r.ΒK?-o[ntˍn-7F{.qP x( `# 4z冪3w -r0F ۇ^;da01O)/:?$xrY:\/w!ARIG7*9JlE9gNRf'V&#^X?ǃɉUh[m8)ߏInWdr oى9`dL^X[J~~u 0 Ϲ;G3A.h 8sjT*( }#5O;kIw\f>b̭\\K *1ця6%6"F?яb(F?яb.ѬzY.F?zGlKM ?jx2Aޅ! ǪC A9y<4<Cxh#*ld)C󝟵Zj@T֡d1.l&P ]x`4f_1N^(5:F >ODdL#Ygx[cޅcD0Z!F>e1#Z.~tz H[x]pi90h_?N<'I|dF`!MdIYt H뗢~)v:F6Sq Ĥ,:҅cMa Z4sIyR8*TKW z-ÜN뷖xx98doa/(+g6٘m a6fcBvlL%6pLLfb2d&:&ڽ*&e3%uͼt&]ԛX8 ] ClXz.m'[d=sa=sa=sa=7ڄh@4璋"?Q劖\BJ>nyB?b4q;8me$9.갪Y.Y.Y.<`jp| K. 
sΤu$Xt&o8g `BHF:OfKaQ 2W(ђנּ 穱!N[ŰlIvo2-^ծ/Cuax1mE>FYZODB*Q]E**z-%DemVi=\Ã"5N蹓;{I $wr﨓;"aivé搀F HD:ɢ!rt`h$`c؁cvBډuPLZpX6ugxD"dM=~"myDHn o@HLY# fp Hix]L9hn'[I'MC{YXbO@E#i6;ptBc &in޽<\|l,Qw>-{7X@ D?gi9}ECY+y {ius&[pל_^ܠ7iydᩝmvZ :2 /×qg.."sY%Q/uUtxҿtUp?-ӰwgF{hpWL!N HȤHWG5 דx0NꙌY;l|@|KF2V*P*1a\ uG% naG JaH`糲:LKOe&el.#KL۾Eq\]}i2-[5f y'+qf$'+q'+q_]AP ̓lA}Wdt^#=n4 [t %Vr3JdFyPy1$-Z=3etu}m)ξ'jw/} g_8ξ.*PlpWf?=jNvXZLw4 #\DI:Q[Nr،;.-Q_ %nOg<ܛ^[c.OmpKXJ &zn{qM\YhDE銈T'TȌ,F%bO5F*,1#GCoF.҅.,..Zg:}u=%a]~\?ƸRm4+EeXbŰ*óa:Oըƛ< ߨ7{{b- s#ogW":2EEܩ24^zcT'TmVY &IoیeKfb+D!VY@) RQ8RAJï|Pc3 ?f1o V:l̋ČHݯF>h}K' I^hf䚗Bb/5}楐1:F wHBF"GDS[wA膡P.Y,!J3}mQoP` ax!Ճ ?nZ)-hQ^xˇ{f4q6T*t%Դo +M7'9ޜ j:ݟ (IF5(bjx񞶴=FxOIiVXx{@d@b jfJuje%ӑm^ ђQeywP궪fNUdiQK GGx&o  62ߐx>s퇙F^|49 _*׻;}PNK36g_$GiY1<=y5H̥/O'/O:^V%^N^?e̮[̩gާO>П{#|>x;pL_|cN o%*7Q>9?Ƌ=G`|sI|J?L|%!ut[|;^1닪zV?zIS?o]^\u/ =gr snf_z+y tJI<' ٖӈއ5`}]sNDI~qNO+ k,>dON P':9weEF5̲ځ^.eI.wWG-糵ǜ vK4{P YT`()_jǮn^  bX.11j ]J!+" v<7^ 9BbuRJJ!j7hh17l;߾ZY1Y1Y1YȖ13tAJROIy^Gϝs^*aqpy)GlZy~^*̮&Z!y,vu{n% B)iPJ&jT<$jSŝ7Rĭ[j\BL"TjQ=cv1+A1*b(S>m. sRs4T*=$mr1zPJ#p*tY~%S(fHH5|v7::(As J8gI 8 9,qbZ3;NZV( Ο'ja}=D_/cb"t_r׎mwhi=wx]; ,u;vy^8ܱ;vp;rWץpW)9[vC(ԃRlM& 1`RݬYdxI[!w&TbO`!1p|IZ+QSLm.D׆ɵi|\ՑOD΢]>^ 6d(Z.xC,T0%2ۧqBeok 42خ/5ڲRB@RJ>Q;RJJ X;W(%ѐ.6w'n%Ap-/бIP֮׍y@)EPQ;aRS&%5 d 4qU} :RZH1TgP/X%&?Qk0Deaq33$H6d;7 S ee(je kRH.`.rmH&bRY( $&Bp 6:VQnxR[k2eq@ ekKkA8HA䦾yBh!Cdm(#Ԧ♧1Q{YC`GI r (E0#(R$t]*J)A*VJe#ףF $;Ugy_J,L;J)&QMP3(uRRg5VPd|sJFMu|˴VT:Xu_8^t٠WOr=+"G~;9РޛgUzŤS/GËaŽ㲀HH7tCKPQ0*J uJ`lG{W&je~<+ vC  @KՑ&{v-E?wXr#IVoj_hN* 9>JTJEz}mû5xwVˬ_D)*oMzoQ- ? 竞dK,]Zrٮp}h& " -AEW3I1hQHr麺Q(2h6k񎿃?~aaɧkvmYOC=3 |r `ɑI7]aƲRO- 0 S!h$8/2+ Rmvkry"YĹU4Vt\-ӎfI瀆Un'Z2U- ;ε!AǸ3H8LqǻQ&u9oMf>21#w|d#32XC{dBehx_c7$E&t$Mvl?T&Аx^tz5+dl?H Iu X;穕% ]v{sodlȅ'>-i3E|hrU. ww7J-fmX?/KtIe_̊ttɫAJ,%O}/:yyf?{ڜƑ[B-r,Z{6.0#MXQo_q*Pk>O9ύ0{%Y7 j/?W|%::Q>/GzTܘqH)X6χqqc:tۜ46[}sJߘ/Uf+WOvElc@3T9KWδǃ5o.1t!8zh6BUʈBMF ǬPˏpJuWqSX3N6&$ ٹ5ggyB4 uؕ1QdyRe\g[dPާ%RSxbm&lѺ, 'CZW~^`)=N-$ dC Г1B! 
٢5 )@ Դ:_"*qok(oC  jD3@B CO,\L RDPm; yXmx2Fb#AH'镸54CHQO5jPҠBbO qhtl!E=)'U^m͸ JPz5{ch4{Q.,(wbk6t @;,I. ; n2_.,!m궒"oZGFH3a&P"$orR@I`FRH7P1 )ee/1rrT)+ HK Gpwݶ@ll1];4Q[!G\b1>vnca!<LD_j̋R;(y4g;}x=3Dah;bk[BD9SVUN^4}$1C%c~,+y%/[MWz w4L^rq,cqPIJF@ 2 DBQ"@_J+ $$9#VǒtWׁR\Cc3߈gszaA C2 Aij6qu!ت $4QsI$MK,kamx@`mMUCzNP!H\ڣVYR )PMR~\%rpj3:C\um CkS#>-ꂺQAAYH(˴!E8͸V-V/ONZKqng8TvT`4c4i#I?e+os66 gMa}{$l4?}GCG4;FHhT|Ȇa&>DLDOUod5wع.~k-֝դ6?.ۿAD{vO^͆4%OPx @ q%0il~rшѰ Ѱy壡Iݛᯚ04S6Ou֫F)6nⶠi҆}Ll/Ȯ9;16f cd?|\{JڑsHTFXֽm}p7O!8T+B#i`dJ}>7L &W ٔÎ%bgJ<8L*C!*h力V+,SC乀D^8Y#]')ʮsyn>8'߼ h>;6-A E{ǟiQAQSXaaj )3@CS~}m90JQ26Nq&22 ?jV2z-f˯>6{ w/y.u`a}J b2- AתJ8\P*ih+"YKc* rHH!U?͏"x-zbH(bH6<+VD zEmz"Xl:Ԗ(P㦩Bʯ<-clžG#AH&!b jRSFTS#Ȗ@GC<I`01gq7Y%n0Ֆ!(Q6_Cļ0]lE? HTM6o 5]SQ0d 4q>dc'zsD&X4OR;Ӫ$|nwO]ѢgqV?,.' ;^鸞Dϧq99->OJ=kG'y=:IQS#ZMWԧY^k:.Fѧެw>5kӽ UˆmnAE7𲿶㺝/\h_b;|߾<4ygڜOaYI&WYI?>Nepx=.(iQй~ػ&n췇}W{zpoDhYK07{oşB=5@ 7~iʹM鰀do̳2=- =e*u Szq éjbe 1we!A֡nl BN!5eơbC>If\HEDZ1(Z)LK#t&m k5?گ-⑝OiH=3+T4Mdziut=>zI@]8R`zKy<8Zaj U2], &'izK9vZa ?)ذ6!h*A)@ "f$Idߩl0uY#:4kXHO%mA'E‚v]^ pl={jeu"gf"NUG\DAq@jx;O2MV!0$j;Ý@:w]*!V?$!ލ(zt~1_ZKPݔxngJ- l(cI1(]!6qa'bS2CT` lPT`:DxoT=!>W׿DH"BHCk` b%X/֌sл>47L=]fRä,.H_Z;?uU#_w~] P ʸgT3c({`+Ț]@R)f-k⻴?*&~>_~=Ei^d]}g)t|G+CzQh,PQ2t߿:cλbK.O=oޞ? 
Ϳwqin7ܣ9Ϻ-Qn/щJ\}9r1R"lf} ޛAd9 ,i9߭=EִS~c~bvqdS+vElc5U+ $bV;}wљSZM8QtAH'Zy:zV6УsU䣴2bf)JkWAڅGrj#\5jcͤ =(I?a2..f}1 }F>;{*>;^9KSd=gI>}l5'*澍DCPS!]1 ޹]*Wٯ'o:^E 'kTS~H"lT`(D^{] YPPJK*û%^=[cn>vq/EZNl͆n7{h%q >:`ZϲeQua 7mRQȵE%B%)0 k,dD8V H9tgj}_L\{Ҧv3*`i1ί b#f麮s'T+X5-6K4;D7[3X^łBSf#}ȫ"\SGs}^|ߙ _loF6;?>Y/68uo5*$K//`p>X4 _,+NmWsRnOB{M%ky:E% ?;cfiB}E,bAI"Z_) $U0#)G$$*7YqQ\TJlY^5]ol"j4=' 6ıζt`I͒=ClDHc^%lܪ*ep(k*E< WJm7& j*E<}2)hCU)) ݱI%Z*&UKUUtXPUz^l_(S| RR[o]!,`YNSU)ImjaPjRDx8J *wUJSRfQlUY^+D*թR[⍐ؚJ τ>a$'2T[o*%<3DII!!Dc֦ &5TH!#`b\J1PY aPnx; VRSLG0T _o&[*=Y U_`mR0Sz Qֆ!kFC*UR~lHWPPp*U*E<٧ "OkS`j*E*e;B"a#T)yuUVIM3U)驵MWVϱTn'p Lfcj{X_UrEHXJ{EhQsu5/l AO `74I)36=]=UOujeo,r+>q]aX } ɃfKsUf-ZՃr42wa7{єF{Ӷ74`{Ȯ{Vp"V7NFb&܄svnM:7%״n09njb5[7W.ẠnhJy AuAf3[6Ĺ|R/Jy;Je#P$G( D)̐4\ ^ lCXmXgCi J=JQR4ZN(:!Jl(E !J 0p0NCA)ܟsLCr YB0NS@V4t i 80sB)ΐV ,N8p jJx1z2 'Y@) 6 ԋR eȂg $p$pڋGJe ZXRUyjIu EjQSŸb}jnt6QOa%oFBw$~9kszs?f.6nZܼ>}_2y"/^_n- oMz|1\Bl=jj%<P}(tq 2RT "V J(0x\a/W(E5В akbA)F`P@Q;4A)X E*.|`Q}?JH!FX Ṕ,/gAtPsQA@"t9( 8l-Ao 1W@+X,L1CE8!r2(yN(%[0jNR/JIY)ۆiH%0$.LSEG- &͕9U=pXnFM>?r#,7rF3;kj 4Qk2DK&,=((DF,;̔hH?sT*F=XuHPUVsQj!T"FQ)LFH1`"hiڭC)֥fJBaWR)TК}by&FTmSyh ?ap7peS}``cn 'd[N{H_I@;2-qgdJng8rJ\lv`,+ ͹R9YVʀk)4_6J-6QseYeA$ ^2~ޓ ~:ƻ$<8 b~)MP O$ P*)c(7+hI/ọi쑞šd*EB)n`Pk/ݟ8j˵K%[F̏myg %u;("|uhӅAΧ]Ѱf7ԕѺTZ[ޠK]qeҟUu*{:x. 
14749ms (19:34:16.161)
Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1478363106]: [14.749648202s] [14.749648202s] END
Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.161882 4932 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.161938 4932 trace.go:236] Trace[1226745459]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:03.553) (total time: 12607ms):
Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1226745459]: ---"Objects listed" error: 12607ms (19:34:16.161)
Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1226745459]: [12.60775738s] [12.60775738s] END
Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.162299 4932 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.167226 4932 trace.go:236] Trace[923529667]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:03.825) (total time: 12340ms):
Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[923529667]: ---"Objects listed" error: 12340ms (19:34:16.166)
Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[923529667]: [12.340651489s] [12.340651489s] END
Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.167290 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 18 19:34:16 crc kubenswrapper[4932]: E0218 19:34:16.168329 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized"
node="crc" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.170678 4932 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.172329 4932 trace.go:236] Trace[1107073082]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:04.055) (total time: 12116ms): Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1107073082]: ---"Objects listed" error: 12116ms (19:34:16.171) Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1107073082]: [12.116578067s] [12.116578067s] END Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.172384 4932 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.185936 4932 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432677 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36880->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432733 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36896->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432771 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36896->192.168.126.11:17697: read: connection reset by peer" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432765 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36880->192.168.126.11:17697: read: connection reset by peer" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.433271 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.433308 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.101548 4932 apiserver.go:52] "Watching apiserver" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.105767 4932 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.106817 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109005 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109068 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109139 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109210 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.109209 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109303 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109482 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.109460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.109575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.113526 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.113931 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.115492 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.116913 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.116994 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117523 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117763 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117816 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117952 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.120009 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-11-24 20:51:47.765976753 +0000 UTC Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.142769 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.159123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176091 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176168 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.176714 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176814 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.176903 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.676867092 +0000 UTC m=+21.258821977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.177307 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.177416 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.677387793 +0000 UTC m=+21.259342678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177455 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177594 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" 
(UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177630 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.178331 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.178456 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.178978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.179032 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.179114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.179269 4932 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.182419 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.189155 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.190279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204362 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204406 4932 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204429 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204510 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.704485028 +0000 UTC m=+21.286439913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.205886 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.206876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.207698 4932 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.209244 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215564 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215602 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215624 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215696 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.715674123 +0000 UTC m=+21.297629008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.222213 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.225843 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.242708 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.259418 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.276347 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280124 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280268 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280311 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280358 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280415 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280453 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280490 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280529 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280561 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280595 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280631 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280669 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280706 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280742 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280773 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280992 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281002 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281045 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281169 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281230 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281266 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281304 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281317 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281340 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281377 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281411 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281447 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281522 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281556 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281718 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281771 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281808 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281843 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281876 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281961 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281996 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282031 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282068 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282102 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282196 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282307 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282346 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282381 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282414 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282447 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282516 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282589 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282626 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282661 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282693 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282766 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283136 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283204 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283240 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283279 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283314 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283351 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283391 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283428 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283462 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283533 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283570 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283605 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283637 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283673 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284598 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284639 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284675 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284777 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284846 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284883 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.298224 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283083 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283103 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283715 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284252 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284271 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284545 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284580 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285031 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285122 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285860 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286026 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.286041 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.785997866 +0000 UTC m=+21.367952751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308144 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308238 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308350 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308406 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308541 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308879 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308922 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308958 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309028 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309061 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309095 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309164 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309234 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309279 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309328 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309361 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309396 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309428 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309530 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309573 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: 
I0218 19:34:17.309609 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309650 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309689 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309744 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309797 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309834 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309874 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309912 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309945 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309981 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310015 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:34:17 crc kubenswrapper[4932]: 
I0218 19:34:17.310053 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310128 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310163 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310302 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310335 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310371 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310395 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310409 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310503 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310581 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310666 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310843 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310970 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311038 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311131 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311241 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311321 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311442 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311519 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312010 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312121 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311632 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312281 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312387 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312465 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312503 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312578 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312644 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312680 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312812 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312848 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312910 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312946 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313113 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313233 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313273 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313339 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313373 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313500 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 
19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313540 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313603 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313642 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313783 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313817 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313885 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313956 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314069 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314130 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc 
kubenswrapper[4932]: I0218 19:34:17.314167 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314305 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314294 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314342 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314411 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314518 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.315792 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286038 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286142 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286152 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286200 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286691 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.287636 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286549 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.300094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.301595 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.301896 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.301921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302119 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302477 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302669 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316395 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.303852 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.303912 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.303945 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.304943 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.305688 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.305689 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306435 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306446 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306569 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306655 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307158 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316424 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316466 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316676 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316673 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316769 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316818 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316991 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317556 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318125 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318446 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318541 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318750 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318812 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319062 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319224 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319316 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319392 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319424 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319763 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319962 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.320424 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.320787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.321559 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.321851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.321961 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.322800 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.322814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323333 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323417 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323469 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323520 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323566 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323606 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323606 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323651 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323696 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323926 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.324366 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.324428 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325040 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325197 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325244 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325297 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325312 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325609 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325634 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325659 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325998 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.326063 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.326476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327113 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327421 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327565 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327683 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328237 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328256 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328443 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328765 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329255 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329542 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329680 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329966 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.330921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.331445 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.331541 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334015 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334024 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334266 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334282 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334293 4932 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334307 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334320 4932 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.333833 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334331 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334408 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334431 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334447 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334468 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334462 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334509 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334504 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334530 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334572 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334625 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334653 4932 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334673 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334688 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334701 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334718 4932 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334734 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334753 4932 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334770 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334787 4932 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334809 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334829 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334852 4932 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334869 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334886 4932 reconciler_common.go:293] "Volume 
detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334904 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334923 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334949 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334967 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334985 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335005 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335023 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335040 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335059 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335079 4932 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335097 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335487 4932 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335502 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335518 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335531 4932 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335544 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335559 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335576 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335590 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335603 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335616 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335633 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335647 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335661 4932 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335678 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335692 4932 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335706 4932 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335721 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335739 4932 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335753 4932 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335767 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335779 4932 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335792 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335804 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335818 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335830 4932 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335842 4932 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335855 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335869 4932 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335883 4932 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335897 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335913 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335924 4932 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335940 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335952 4932 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335964 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335977 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335989 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336001 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336015 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336034 4932 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336047 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336060 4932 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336075 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336119 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336133 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336145 4932 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 
crc kubenswrapper[4932]: I0218 19:34:17.336157 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336188 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336202 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336217 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336230 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336264 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336281 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336299 4932 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336316 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336335 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336354 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336372 4932 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336389 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336402 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336415 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336428 4932 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336440 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336457 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336469 4932 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336481 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336496 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336509 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 
19:34:17.336505 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336526 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336618 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336646 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336670 4932 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336691 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336713 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336733 4932 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336755 4932 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336778 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336799 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336820 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336841 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336861 4932 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336880 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336900 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.338346 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.338691 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.339031 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.339269 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.339495 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.340423 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341119 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341428 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341571 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341761 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341901 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341822 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342190 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342232 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342378 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342453 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342517 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342688 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342861 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.343078 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.343398 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.343874 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.344251 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.344486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.344581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.345649 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.346282 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353026 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353329 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353765 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353794 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.360810 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.360971 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.361054 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.363611 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.363656 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.363976 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.364348 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.365881 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.358363 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.366065 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.366202 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.366384 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.367430 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.367826 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.367988 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.368531 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.369820 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371333 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371518 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371616 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371689 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203" exitCode=255
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371732 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203"}
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372030 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372345 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372778 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372771 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372872 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.373346 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.373456 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.373585 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.374606 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.375296 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.375207 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.375718 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.376135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.376430 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.384420 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.391696 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.395066 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.395822 4932 scope.go:117] "RemoveContainer" containerID="5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.403961 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.405742 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.419301 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.421043 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.424291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.425438 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.430917 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437092 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437365 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437391 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437411 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437421 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437429 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437438 4932 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437449 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437459 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437468 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437478 4932 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437489 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437498 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437507 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437516 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437525 4932 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName:
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437533 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437542 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437550 4932 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437559 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437567 4932 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437576 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437585 4932 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437594 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437603 4932 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437615 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437625 4932 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437634 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437644 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437653 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437662 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437670 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437679 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437687 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437697 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437706 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437715 4932 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437724 4932 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437733 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437741 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437751 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437759 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437767 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437777 4932 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437788 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437797 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437806 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437815 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437825 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437833 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437843 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437851 4932 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437859 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437868 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437877 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437885 4932 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437894 4932 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437902 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437911 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 
18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437920 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437929 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437940 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437949 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437959 4932 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437969 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437977 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437986 4932 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437995 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438005 4932 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438014 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438024 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438032 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.440596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.440668 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.449722 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.457501 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: W0218 19:34:17.463781 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b WatchSource:0}: Error finding container 47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b: Status 404 returned error can't find the container with id 47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.475900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.490913 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.510411 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.524514 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.538897 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.740497 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.740535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:17 crc 
kubenswrapper[4932]: I0218 19:34:17.740558 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.740582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740677 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740739 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740721621 +0000 UTC m=+22.322676466 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740746 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740750 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740761 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740777 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740782 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740791 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740822 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740810223 +0000 UTC m=+22.322765068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740686 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740839 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740832773 +0000 UTC m=+22.322787618 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740866 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740850013 +0000 UTC m=+22.322804858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.841388 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.841653 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.841603684 +0000 UTC m=+22.423558569 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.120922 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:49:54.69348219 +0000 UTC Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.178705 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.178850 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.379614 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.382922 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.383345 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.385628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.389366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.389437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.389460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2cd2fca04fdb6eed057d0b9ccad0238d16ec7b43dc6b6798111340d0d78114c9"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.392245 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.392281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e2c7d37280f8d9292dac622ef6e34fc28791cd83fb2faf87fc669fbbd302e899"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.404929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.420987 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.445319 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.467622 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.486658 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.508224 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.530254 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.553407 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.576613 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.598358 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.621079 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.643852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.664797 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.682641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750711 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750782 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750818 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750831 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750799 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750891 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750963 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.750945962 +0000 UTC m=+24.332900797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750959 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750995 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750981 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.750973843 +0000 UTC m=+24.332928688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751014 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751111 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.751086075 +0000 UTC m=+24.333040950 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751129 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751225 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.751204058 +0000 UTC m=+24.333158913 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.851754 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.851956 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.851922398 +0000 UTC m=+24.433877243 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.121311 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:17:03.566662577 +0000 UTC Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.178676 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.178727 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:19 crc kubenswrapper[4932]: E0218 19:34:19.178829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:19 crc kubenswrapper[4932]: E0218 19:34:19.178992 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.183457 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.184371 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.185342 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.186139 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.186911 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.187572 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.188380 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.189148 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.189948 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.190693 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.191444 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.192463 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.193142 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.193868 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.196863 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.197732 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.198819 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.199631 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.200757 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.201922 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.202782 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.203610 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.204331 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.205372 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.205954 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.206922 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.210511 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.211198 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.211947 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.212778 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.213474 4932 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.213623 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.215473 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.216109 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.216798 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.219764 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.220816 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.221546 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.222221 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.222887 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.223401 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.223989 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.224698 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.225318 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.225765 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.226313 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.226859 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.228926 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.229811 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.233043 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.233814 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.234799 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.236621 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.237297 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.913594 4932 csr.go:261] certificate signing request csr-r8shz is approved, waiting to be issued Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.932548 4932 csr.go:257] certificate signing request csr-r8shz is issued Feb 18 19:34:19 crc 
kubenswrapper[4932]: I0218 19:34:19.992233 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jmmxw"] Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.992540 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.992863 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bz9kj"] Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.993223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995134 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995508 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995677 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995840 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995949 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.996045 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.996137 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.008000 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.013545 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.016497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.025111 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.035794 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.049382 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062324 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062806 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/45a22d6d-69dc-4c93-acd4-188dc6d1e315-serviceca\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062868 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a22d6d-69dc-4c93-acd4-188dc6d1e315-host\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062927 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9jl\" (UniqueName: \"kubernetes.io/projected/4495ae98-57db-4409-87a7-56192683cc00-kube-api-access-wb9jl\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062950 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkr8\" (UniqueName: \"kubernetes.io/projected/45a22d6d-69dc-4c93-acd4-188dc6d1e315-kube-api-access-dnkr8\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062980 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4495ae98-57db-4409-87a7-56192683cc00-hosts-file\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.075218 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.092297 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.098885 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.105077 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.120768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.122992 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:17:06.133696671 +0000 UTC Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.134662 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.147530 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.161243 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163454 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a22d6d-69dc-4c93-acd4-188dc6d1e315-serviceca\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/45a22d6d-69dc-4c93-acd4-188dc6d1e315-host\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9jl\" (UniqueName: \"kubernetes.io/projected/4495ae98-57db-4409-87a7-56192683cc00-kube-api-access-wb9jl\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkr8\" (UniqueName: \"kubernetes.io/projected/45a22d6d-69dc-4c93-acd4-188dc6d1e315-kube-api-access-dnkr8\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163553 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4495ae98-57db-4409-87a7-56192683cc00-hosts-file\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163619 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4495ae98-57db-4409-87a7-56192683cc00-hosts-file\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163918 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a22d6d-69dc-4c93-acd4-188dc6d1e315-host\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " 
pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.166582 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a22d6d-69dc-4c93-acd4-188dc6d1e315-serviceca\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.176528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.178194 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.178300 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.185200 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9jl\" (UniqueName: \"kubernetes.io/projected/4495ae98-57db-4409-87a7-56192683cc00-kube-api-access-wb9jl\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.188965 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkr8\" (UniqueName: \"kubernetes.io/projected/45a22d6d-69dc-4c93-acd4-188dc6d1e315-kube-api-access-dnkr8\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.203524 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.216455 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.232568 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.246366 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.267753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.306921 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.313866 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: W0218 19:34:20.333601 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4495ae98_57db_4409_87a7_56192683cc00.slice/crio-fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00 WatchSource:0}: Error finding container fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00: Status 404 returned error can't find the container with id fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00 Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.408004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bz9kj" event={"ID":"4495ae98-57db-4409-87a7-56192683cc00","Type":"ContainerStarted","Data":"fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00"} Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.417297 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmmxw" event={"ID":"45a22d6d-69dc-4c93-acd4-188dc6d1e315","Type":"ContainerStarted","Data":"5c934bcaca0245db9ce20e13c22c18dad4eafacf7520b47c08ddae956032404d"} Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769476 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769681 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769681 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769707 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769719 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769744 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.769727359 +0000 UTC m=+28.351682204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769761 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.76975394 +0000 UTC m=+28.351708785 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769810 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769921 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.769897543 +0000 UTC m=+28.351852388 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769823 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769964 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769979 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.770016 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.770009575 +0000 UTC m=+28.351964420 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.819394 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jf9v4"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.819805 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sj8bg"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.820000 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.820067 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.824004 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.824983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825011 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825075 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825083 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825089 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825481 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z7nqj"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826116 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826768 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826776 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826891 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.827419 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.827493 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.827684 4932 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828556 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828596 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828597 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828653 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828757 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.829668 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.829796 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.845429 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870134 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9r7v\" (UniqueName: \"kubernetes.io/projected/c2740774-23d5-4857-9ac6-f0a01e38a64c-kube-api-access-g9r7v\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870310 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870362 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870384 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870404 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2740774-23d5-4857-9ac6-f0a01e38a64c-rootfs\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2740774-23d5-4857-9ac6-f0a01e38a64c-proxy-tls\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870442 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-socket-dir-parent\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870465 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-conf-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bv7\" (UniqueName: \"kubernetes.io/projected/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-kube-api-access-j7bv7\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870585 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870606 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870647 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870669 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870691 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-os-release\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc 
kubenswrapper[4932]: I0218 19:34:20.870710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-cnibin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870738 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870781 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-multus\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870804 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870824 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc 
kubenswrapper[4932]: I0218 19:34:20.870846 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870868 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7ht\" (UniqueName: \"kubernetes.io/projected/1b8d80e2-307e-43b6-9003-e77eef51e084-kube-api-access-lp7ht\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870889 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870948 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 
19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870993 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-netns\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871041 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871062 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-k8s-cni-cncf-io\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871100 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-kubelet\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" 
Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871120 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-hostroot\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871170 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871229 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cnibin\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: 
\"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-system-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871342 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-cni-binary-copy\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871368 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-bin\") pod 
\"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-etc-kubernetes\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871419 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2740774-23d5-4857-9ac6-f0a01e38a64c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871441 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-daemon-config\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871496 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-os-release\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-multus-certs\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871537 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.871653 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.871634335 +0000 UTC m=+28.453589190 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.875494 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.897817 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.918543 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.932094 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.934091 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 19:29:19 +0000 UTC, rotation deadline is 2026-11-09 16:10:47.839264864 +0000 UTC Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.934116 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6332h36m26.905152099s for next certificate rotation Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.947385 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.957684 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972159 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7ht\" (UniqueName: \"kubernetes.io/projected/1b8d80e2-307e-43b6-9003-e77eef51e084-kube-api-access-lp7ht\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972193 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972211 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972241 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"ovnkube-node-hbqb5\" (UID: 
\"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-netns\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972257 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-netns\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972357 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972426 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-k8s-cni-cncf-io\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-kubelet\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-hostroot\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972495 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972512 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cnibin\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972592 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-system-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972609 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-cni-binary-copy\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-bin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972643 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-etc-kubernetes\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972663 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c2740774-23d5-4857-9ac6-f0a01e38a64c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972686 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-daemon-config\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972719 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-os-release\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-multus-certs\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972748 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfjd\" (UniqueName: 
\"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972762 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9r7v\" (UniqueName: \"kubernetes.io/projected/c2740774-23d5-4857-9ac6-f0a01e38a64c-kube-api-access-g9r7v\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972777 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972816 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2740774-23d5-4857-9ac6-f0a01e38a64c-rootfs\") 
pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972845 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2740774-23d5-4857-9ac6-f0a01e38a64c-proxy-tls\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-socket-dir-parent\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972870 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-kubelet\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-conf-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972902 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-conf-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " 
pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972910 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972935 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7bv7\" (UniqueName: \"kubernetes.io/projected/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-kube-api-access-j7bv7\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972952 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972967 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-os-release\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-cnibin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc 
kubenswrapper[4932]: I0218 19:34:20.973118 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-multus\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973163 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-multus\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-k8s-cni-cncf-io\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973222 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973348 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-hostroot\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973583 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cnibin\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973623 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973763 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-os-release\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973859 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-os-release\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972822 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973974 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-socket-dir-parent\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974371 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974401 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-multus-certs\") pod \"multus-sj8bg\" (UID: 
\"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974443 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974448 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-bin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974480 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-system-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974467 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974521 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2740774-23d5-4857-9ac6-f0a01e38a64c-rootfs\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974769 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-daemon-config\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974795 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-etc-kubernetes\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974827 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-cnibin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.975004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-cni-binary-copy\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.975350 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2740774-23d5-4857-9ac6-f0a01e38a64c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.979711 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.981665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2740774-23d5-4857-9ac6-f0a01e38a64c-proxy-tls\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.988641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.994014 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9r7v\" (UniqueName: \"kubernetes.io/projected/c2740774-23d5-4857-9ac6-f0a01e38a64c-kube-api-access-g9r7v\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.994713 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7ht\" (UniqueName: \"kubernetes.io/projected/1b8d80e2-307e-43b6-9003-e77eef51e084-kube-api-access-lp7ht\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.997053 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"ovnkube-node-hbqb5\" (UID: 
\"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.998120 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7bv7\" (UniqueName: \"kubernetes.io/projected/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-kube-api-access-j7bv7\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.003144 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.015146 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.028081 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.042194 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.057697 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.072909 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.089015 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.111394 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.124254 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:27:54.903012395 +0000 UTC Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.128206 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.137234 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.141557 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.144912 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sj8bg" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.152552 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.158884 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.159262 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:21 crc kubenswrapper[4932]: W0218 19:34:21.162663 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b8d80e2_307e_43b6_9003_e77eef51e084.slice/crio-ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47 WatchSource:0}: Error finding container ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47: Status 404 returned error can't find the container with id ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47 Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.170016 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: W0218 19:34:21.170609 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e3c087_c564_4f66_a656_c92a4e47fa72.slice/crio-1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d WatchSource:0}: Error finding container 1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d: Status 404 returned error can't find the container with id 1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 
19:34:21.178493 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.178559 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:21 crc kubenswrapper[4932]: E0218 19:34:21.178623 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:21 crc kubenswrapper[4932]: E0218 19:34:21.178707 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.181524 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.195320 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.205009 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.218631 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.230705 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.421258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" 
event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.421648 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.422666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.422704 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"6673604ef23c936990fd3a8cd5650ce53797b3756c5d09c8a2d50e5da9e76dc9"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.423613 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" exitCode=0 Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.423661 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.423678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" 
event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.430941 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.433624 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmmxw" event={"ID":"45a22d6d-69dc-4c93-acd4-188dc6d1e315","Type":"ContainerStarted","Data":"73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.437652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerStarted","Data":"44d780ab0506509c5aaeb1e360d306fcb01135fed3f85c63db86c122cb10c676"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.438435 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.438936 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bz9kj" event={"ID":"4495ae98-57db-4409-87a7-56192683cc00","Type":"ContainerStarted","Data":"9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.455750 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.473324 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.497607 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.509536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.525750 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.538844 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.552957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.566689 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.585755 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.601717 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.615644 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.629430 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.642309 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.659403 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.673726 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.690574 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.700460 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.712684 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.724902 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.737847 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.751941 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.765350 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.791785 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.805321 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.819816 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.835167 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.848104 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.125400 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:52:03.014328623 +0000 UTC Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.178267 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.178420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446383 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446787 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.447946 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06" exitCode=0 Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.448038 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.449733 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.466823 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.485488 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.502080 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.514748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.534146 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.552096 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.567516 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.568445 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.572785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.572849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: 
I0218 19:34:22.572863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.572992 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.579887 4932 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.580353 4932 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.580328 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582116 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582159 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.594846 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.611868 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616187 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.617033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.632036 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.632744 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636871 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636900 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.650496 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.652134 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664712 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.671221 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.683374 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687555 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687567 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.688347 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.699354 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.699522 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701846 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.702975 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z 
is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.718453 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.733735 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.749836 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.773040 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.790201 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804701 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.810515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.828142 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.843148 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.857908 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.872064 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.890483 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.907502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.907974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.908025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.908045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.908055 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.913859 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.930931 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117070 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117271 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117298 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.126305 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:17:20.872520739 +0000 UTC Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.178951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.179028 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:23 crc kubenswrapper[4932]: E0218 19:34:23.179214 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:23 crc kubenswrapper[4932]: E0218 19:34:23.179355 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220277 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220312 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323795 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323841 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426716 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426764 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.458539 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78" exitCode=0 Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.458631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.485034 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.513134 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.531886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.531970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.531994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc 
kubenswrapper[4932]: I0218 19:34:23.532032 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.532058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.536529 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.564101 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.593787 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.608908 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.631014 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635152 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.649228 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.659384 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.673824 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.684400 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.697918 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.710017 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.722421 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.737930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.737970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.737983 4932 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.738002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.738017 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.840972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841044 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841105 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841127 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.943916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.943975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.943992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.944016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.944034 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047837 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047899 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.127342 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:08:32.346828049 +0000 UTC Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.150978 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151034 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151062 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.178747 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.178934 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254347 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.357655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358215 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358283 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461359 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.465866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.468472 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.468562 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.491431 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.504845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.516903 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.527783 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.540866 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 
19:34:24.558694 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563510 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc 
kubenswrapper[4932]: I0218 19:34:24.563541 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.573359 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.590770 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.606412 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.629133 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.647859 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.662323 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665938 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.679201 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.694504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768695 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768819 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.822973 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.823025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.823077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.823105 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823247 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823337 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823360 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823374 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823435 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823372024 +0000 UTC m=+36.405326909 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823460 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823285 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823509 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823519 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823491 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823467066 +0000 UTC m=+36.405422011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823566 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823550218 +0000 UTC m=+36.405505073 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823581 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823573178 +0000 UTC m=+36.405528033 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871917 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.924376 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.924612 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.924591564 +0000 UTC m=+36.506546419 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: 
I0218 19:34:24.975447 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.078944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079719 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079735 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.127720 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:06:06.069813304 +0000 UTC Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.182407 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:25 crc kubenswrapper[4932]: E0218 19:34:25.182572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.183820 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:25 crc kubenswrapper[4932]: E0218 19:34:25.183944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189604 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292330 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395604 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395649 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.477559 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218" exitCode=0 Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.477642 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.492999 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.497933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498003 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498021 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498066 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.513258 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.528484 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.546801 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.567037 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.584704 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601277 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.608810 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.629275 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.645589 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.668007 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.686477 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 
19:34:25.703370 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703413 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.706528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.723453 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.736429 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806355 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909616 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013238 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116471 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116534 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.128331 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:55:16.488407937 +0000 UTC Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.184908 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:26 crc kubenswrapper[4932]: E0218 19:34:26.185208 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219907 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324510 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324634 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427973 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.486561 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a" exitCode=0 Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.486784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.508698 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531202 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531364 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531375 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 
19:34:26.531394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531407 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.547526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.569265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.591392 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.606434 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.622143 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634686 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634760 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634811 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.639558 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.657213 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.671957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.686508 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.708803 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.731441 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740070 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.759359 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.799121 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc 
kubenswrapper[4932]: I0218 19:34:26.843290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843323 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.938471 4932 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957778 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061340 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.128948 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:41:27.835478124 +0000 UTC Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165966 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.166011 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.178424 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.178467 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:27 crc kubenswrapper[4932]: E0218 19:34:27.178695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:27 crc kubenswrapper[4932]: E0218 19:34:27.180264 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.196343 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.234942 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.254612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269886 4932 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269905 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.276728 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.303370 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.324234 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.339699 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.360775 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.374877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.374957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.374982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.375010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.375038 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.380358 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.394502 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.411038 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.429023 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.444283 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.465471 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc 
kubenswrapper[4932]: I0218 19:34:27.477504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477543 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.494727 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.495343 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.495386 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.501042 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf" exitCode=0 Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.501111 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.511641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.566777 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.574302 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.576159 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.592064 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.606900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.623862 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.637612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.651231 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.668112 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682350 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc 
kubenswrapper[4932]: I0218 19:34:27.682365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682375 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.686011 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba04
01c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.704356 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.717644 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.732856 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.746431 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.769742 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194c
da3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785263 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785591 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.803721 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.817852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.832968 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.846824 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.859592 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.875336 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889841 4932 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.890511 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.907951 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.921669 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.937651 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.952733 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.970926 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993678 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.997575 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096942 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.129796 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:08:15.385505321 +0000 UTC Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.178883 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:28 crc kubenswrapper[4932]: E0218 19:34:28.179185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200180 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200254 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303636 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407604 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509263 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509293 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.510400 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.510435 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerStarted","Data":"7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.533101 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.550390 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.566528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.580686 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.583442 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.601702 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.611996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612063 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612104 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.619653 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.635399 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.652180 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.670404 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.692362 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.712072 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715213 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715302 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.731050 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.754559 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.769131 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.782500 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.803294 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.817168 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818797 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818825 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.840503 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.871363 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.891506 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.909846 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922088 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922106 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.930162 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.951752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.967463 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.986872 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.010060 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024598 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024606 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.028642 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.045432 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126985 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126995 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.130079 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:04:14.026352848 +0000 UTC Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.178548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:29 crc kubenswrapper[4932]: E0218 19:34:29.178664 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.178515 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:29 crc kubenswrapper[4932]: E0218 19:34:29.179323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229652 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333132 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436238 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436392 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.513856 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539228 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539393 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642717 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747717 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851214 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955419 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955432 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058631 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058691 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.130985 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:46:23.602733 +0000 UTC Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.162889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.162952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.162971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.163000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.163021 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.179148 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:30 crc kubenswrapper[4932]: E0218 19:34:30.179368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.266884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.267478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.267934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.268469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.268935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372551 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475955 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.521318 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/0.log" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.525123 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009" exitCode=1 Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.525242 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.526669 4932 scope.go:117] "RemoveContainer" containerID="ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.552123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.573461 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579530 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579646 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.590580 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.611626 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.635819 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7
daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.658722 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.680225 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc 
kubenswrapper[4932]: I0218 19:34:30.683273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683291 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.698113 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.716780 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.750952 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping 
reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.773086 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.785993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786100 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786154 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.793800 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.812997 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.824696 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895787 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005528 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.131662 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:09:59.479127916 +0000 UTC Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.179854 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.179903 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:31 crc kubenswrapper[4932]: E0218 19:34:31.180033 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:31 crc kubenswrapper[4932]: E0218 19:34:31.180144 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211572 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313797 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.416991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519788 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.533426 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/0.log" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.537026 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.537161 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.561441 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.585442 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.609813 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621984 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.622002 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.630093 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.654265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.673808 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.689978 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.704600 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.722802 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc 
kubenswrapper[4932]: I0218 19:34:31.724867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724886 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.741409 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19
:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 
19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.763826 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.783619 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.803038 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.825299 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping 
reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827680 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827783 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827804 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931504 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.034921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035106 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.132379 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:15:15.051406651 +0000 UTC Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137955 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137974 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.178767 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.178967 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241536 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344917 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448644 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.544282 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.545325 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/0.log" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.550861 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" exitCode=1 Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.550915 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.550982 4932 scope.go:117] "RemoveContainer" containerID="ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552180 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552267 
4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.555670 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.556047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.578493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.601330 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.622965 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.644349 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.655860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.655934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.655957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.656007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.656025 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.680490 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.700907 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.722048 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.743792 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758576 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758641 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758653 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.760270 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768456 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768509 
4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.789679 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.814064 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.827932 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.827967 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.827992 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.828009 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828132 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828134 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828133 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828149 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828256 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828234 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.828219368 +0000 UTC m=+52.410174213 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828285 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828356 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828371 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828315 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.82829787 +0000 UTC m=+52.410252735 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828465 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.828444803 +0000 UTC m=+52.410399708 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828485 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.828477264 +0000 UTC m=+52.410432209 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.830424 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834373 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.842071 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.858559 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.861255 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864807 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864889 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.871958 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.878522 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881674 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.891437 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894787 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.928345 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.928501 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.928487468 +0000 UTC m=+52.510442313 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.936624 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.936761 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938282 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040914 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.132732 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:50:20.475405254 +0000 UTC Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144227 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144251 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.179297 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.179371 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.179505 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.179986 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246801 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246891 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349797 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453352 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453417 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453437 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453481 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.555885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.555982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.556001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.556023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.556042 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.560936 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660098 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660299 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763953 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.867622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868503 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.938346 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj"] Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.939082 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:33 crc kubenswrapper[4932]: W0218 19:34:33.942301 4932 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:34:33 crc kubenswrapper[4932]: W0218 19:34:33.945215 4932 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.948558 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.945322 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:34:33 crc 
kubenswrapper[4932]: I0218 19:34:33.972010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972086 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972168 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.979910 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.000989 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.023087 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.041327 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8llv\" (UniqueName: \"kubernetes.io/projected/64edee2c-efed-415d-8d8e-362edad7c5bb-kube-api-access-b8llv\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044348 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044434 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.065090 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075406 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075427 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.087086 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.106957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.130042 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.133291 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:02:54.99425301 +0000 UTC Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.145517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8llv\" (UniqueName: \"kubernetes.io/projected/64edee2c-efed-415d-8d8e-362edad7c5bb-kube-api-access-b8llv\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.145601 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 
crc kubenswrapper[4932]: I0218 19:34:34.145705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.145755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.147072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.147165 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.154240 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.178266 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:34 crc kubenswrapper[4932]: E0218 19:34:34.178486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179258 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179282 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.180653 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.189453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8llv\" (UniqueName: \"kubernetes.io/projected/64edee2c-efed-415d-8d8e-362edad7c5bb-kube-api-access-b8llv\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.205891 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.228275 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.249378 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.270643 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc 
kubenswrapper[4932]: I0218 19:34:34.283171 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283231 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.293270 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386680 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386849 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491310 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594440 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594459 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697650 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801362 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801439 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905610 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.009883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.009961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.009979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.010009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.010033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.116552 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kdjbt"] Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117534 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117635 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.117757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.133744 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:59:20.118302823 +0000 UTC Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.141468 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.146549 4932 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.146649 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert podName:64edee2c-efed-415d-8d8e-362edad7c5bb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:35.646621288 +0000 UTC m=+39.228576163 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-749d76644c-bzfpj" (UID: "64edee2c-efed-415d-8d8e-362edad7c5bb") : failed to sync secret cache: timed out waiting for the condition Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.164268 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-iden
tity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.179210 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.179299 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.179372 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.179493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.179976 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.201151 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.217810 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220518 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.235142 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.258189 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod 
\"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.258276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9kj\" (UniqueName: \"kubernetes.io/projected/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-kube-api-access-2r9kj\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.260465 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID
\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.280570 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.305099 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326371 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326493 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326517 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.327402 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.352724 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.359549 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r9kj\" (UniqueName: 
\"kubernetes.io/projected/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-kube-api-access-2r9kj\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.359739 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.359932 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.360020 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:35.859998639 +0000 UTC m=+39.441953514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.376692 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.377054 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.387760 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.391059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r9kj\" (UniqueName: \"kubernetes.io/projected/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-kube-api-access-2r9kj\") pod 
\"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.396472 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.422601 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431346 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 
19:34:35.431401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431431 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.453109 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.473724 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc 
kubenswrapper[4932]: I0218 19:34:35.539373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539496 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643613 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.663038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.668858 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.746733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747375 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.765350 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:35 crc kubenswrapper[4932]: W0218 19:34:35.787129 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64edee2c_efed_415d_8d8e_362edad7c5bb.slice/crio-fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f WatchSource:0}: Error finding container fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f: Status 404 returned error can't find the container with id fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.853971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.865653 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.865906 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.866009 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:36.865984818 +0000 UTC m=+40.447939703 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.901052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.901897 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.902070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.918030 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.931505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.948130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958412 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958419 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958448 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958457 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.962067 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.974887 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.986550 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.997041 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.013056 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.025768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.039260 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.054987 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061300 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.067869 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.086238 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.108265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.125753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.134859 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:32:48.691953197 +0000 UTC Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.138345 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc 
kubenswrapper[4932]: I0218 19:34:36.164569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.179228 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:36 crc kubenswrapper[4932]: E0218 19:34:36.179403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370888 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370943 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370963 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473700 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577539 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.582123 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" event={"ID":"64edee2c-efed-415d-8d8e-362edad7c5bb","Type":"ContainerStarted","Data":"76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.582232 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" event={"ID":"64edee2c-efed-415d-8d8e-362edad7c5bb","Type":"ContainerStarted","Data":"594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.582256 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" event={"ID":"64edee2c-efed-415d-8d8e-362edad7c5bb","Type":"ContainerStarted","Data":"fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.604955 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.631486 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.650599 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.673632 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681753 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681825 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681889 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.698490 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.716762 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.741364 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.761102 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.781744 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc 
kubenswrapper[4932]: I0218 19:34:36.785210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785306 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.816888 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 
19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.836967 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.859120 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: E0218 19:34:36.878525 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.878870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:36 crc kubenswrapper[4932]: E0218 19:34:36.879085 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:38.879046572 +0000 UTC m=+42.461001447 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.885710 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890506 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890525 
4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890571 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.905828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f
38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.929262 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.957242 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994006 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994140 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096872 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.135543 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:30:52.344795943 +0000 UTC Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.178387 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:37 crc kubenswrapper[4932]: E0218 19:34:37.179495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.179545 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.179620 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:37 crc kubenswrapper[4932]: E0218 19:34:37.179802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:37 crc kubenswrapper[4932]: E0218 19:34:37.179990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.193981 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199540 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199634 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.206484 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.221669 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.247245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7
daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.271499 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.302079 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.302969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303072 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.323570 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.343269 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.370939 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:
16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e
0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.388031 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405375 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 
19:34:37.405399 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405418 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.416351 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.432928 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.448400 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.467526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.483087 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.495293 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508811 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508854 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611552 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611662 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715445 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818755 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922460 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025799 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025846 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025896 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129538 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.136409 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:03:26.615837202 +0000 UTC Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.179109 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:38 crc kubenswrapper[4932]: E0218 19:34:38.179364 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233747 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233834 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338779 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.442968 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443123 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546680 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650852 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650969 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754678 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858268 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858407 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.899125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:38 crc kubenswrapper[4932]: E0218 19:34:38.899485 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:38 crc kubenswrapper[4932]: E0218 19:34:38.899594 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:42.899571407 +0000 UTC m=+46.481526292 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.962679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.962762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.962781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.963247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.963305 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067527 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067577 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067598 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.137018 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:20:10.60753206 +0000 UTC Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171078 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171262 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.178891 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.178961 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.179016 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:39 crc kubenswrapper[4932]: E0218 19:34:39.179255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:39 crc kubenswrapper[4932]: E0218 19:34:39.179407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:39 crc kubenswrapper[4932]: E0218 19:34:39.179681 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274132 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274239 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377908 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481886 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585343 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585444 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.688965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689100 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.792974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.793445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.793667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.793963 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.794243 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898207 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.001487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.001947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.002261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.002564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.002810 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.106445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.106922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.107105 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.107325 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.107516 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.137589 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:28:40.753112565 +0000 UTC Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.179229 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:40 crc kubenswrapper[4932]: E0218 19:34:40.179493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211202 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211338 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211356 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314787 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418201 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522813 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522831 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626768 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831855 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.934937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935008 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935116 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037869 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037901 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.138951 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:28:28.274894939 +0000 UTC Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.140936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141431 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.178337 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.178542 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.178704 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:41 crc kubenswrapper[4932]: E0218 19:34:41.179402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:41 crc kubenswrapper[4932]: E0218 19:34:41.179484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:41 crc kubenswrapper[4932]: E0218 19:34:41.179642 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244911 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347568 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347590 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450692 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554141 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658242 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658261 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.761994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762096 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762154 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865461 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968556 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.139035 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:17:42.525747333 +0000 UTC Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174364 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174393 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174406 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.178749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:42 crc kubenswrapper[4932]: E0218 19:34:42.178860 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277938 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381357 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381380 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.484890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485343 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691878 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794798 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897400 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.946133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:42 crc kubenswrapper[4932]: E0218 19:34:42.946420 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:42 crc kubenswrapper[4932]: E0218 19:34:42.946583 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:50.946528775 +0000 UTC m=+54.528483660 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977450 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.001929 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008363 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.029685 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035868 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.057379 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063904 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.064013 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.084280 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.111578 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.111904 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114670 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.140000 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:55:27.105537769 +0000 UTC Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.179106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.179258 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.179299 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.179870 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.179960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.179998 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223845 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327327 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430985 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.431012 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534826 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638322 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.741726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742660 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845801 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.949087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.949512 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.949774 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.950024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.950275 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.053958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.054428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.054658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.054862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.055080 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.141203 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:39:58.484594016 +0000 UTC Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.157745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.157972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.158204 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.158435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.158668 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.178218 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:34:44 crc kubenswrapper[4932]: E0218 19:34:44.178689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.262454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.262916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.263113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.263378 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.263607 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.368091 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470853 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470927 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470950 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470996 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574737 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574800 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678127 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678304 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780929 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780947 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884329 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986969 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.090321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.090738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.090885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.091025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.091264 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.142290 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:50:35.278569312 +0000 UTC
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.178503 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.178577 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.178763 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:34:45 crc kubenswrapper[4932]: E0218 19:34:45.178943 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:34:45 crc kubenswrapper[4932]: E0218 19:34:45.179107 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c"
Feb 18 19:34:45 crc kubenswrapper[4932]: E0218 19:34:45.179451 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194364 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194472 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401565 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504821 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.607398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.607483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.607509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.608139 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.608241 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.745931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746076 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746096 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849228 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849348 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951774 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951942 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054653 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054719 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.143830 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:38:03.793456446 +0000 UTC
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157253 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157384 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.178825 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:34:46 crc kubenswrapper[4932]: E0218 19:34:46.179002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260747 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260788 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260806 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364345 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364474 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467037 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467166 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467276 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.574006 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677524 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.779898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.779969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.779994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.780021 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.780043 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883412 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986797 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986840 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089949 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.154441 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:06:36.297914774 +0000 UTC
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.178993 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.179122 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.179249 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:34:47 crc kubenswrapper[4932]: E0218 19:34:47.179371 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c"
Feb 18 19:34:47 crc kubenswrapper[4932]: E0218 19:34:47.179150 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:34:47 crc kubenswrapper[4932]: E0218 19:34:47.179702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.202416 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc 
kubenswrapper[4932]: I0218 19:34:47.218100 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.240280 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 
19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.251703 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.275902 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.292026 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294302 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294447 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.310160 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.330374 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.346368 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.368726 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.385479 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.396844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.396944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.396970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.397004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.397027 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.412058 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.429725 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.448686 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.466210 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.481828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc 
kubenswrapper[4932]: I0218 19:34:47.500611 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500623 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500657 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.603941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.603989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.604004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.604028 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.604045 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706427 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706467 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808702 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808826 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911927 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.950444 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.962345 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.977513 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.994790 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.014033 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.015778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016550 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016603 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.027883 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.040929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.056787 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.078620 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.097004 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.117922 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.120806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc 
kubenswrapper[4932]: I0218 19:34:48.121025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.121232 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.121415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.121574 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.132576 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.150571 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.155156 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:12:31.388305587 +0000 UTC Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.178675 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.178901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.180837 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.194849 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.212710 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224511 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.226905 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.242554 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 
2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327529 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327541 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.434152 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435017 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435086 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.537746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538944 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641309 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744612 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846933 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914815 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914882 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.914904 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914921 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915022 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.914992937 +0000 UTC m=+84.496947822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915054 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915122 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915164 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915214 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915167 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.91513678 +0000 UTC m=+84.497091685 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915281 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915319 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.915293293 +0000 UTC m=+84.497248178 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915334 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915353 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915437 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.915407556 +0000 UTC m=+84.497362411 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950695 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.015995 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.016273 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:35:21.016233358 +0000 UTC m=+84.598188243 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: 
I0218 19:34:49.053513 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.155771 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:37:00.074805985 +0000 UTC Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156420 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.178933 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.178979 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.179141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.179279 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.179330 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.179517 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259346 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362673 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466102 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466123 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466200 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568716 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568888 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671306 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671335 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773954 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877350 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877502 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.980896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.980992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.981013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.981096 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.981155 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083729 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.156253 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:31:11.709480076 +0000 UTC
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.178670 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:34:50 crc kubenswrapper[4932]: E0218 19:34:50.178805 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.180002 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186085 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186132 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289467 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391934 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494933 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.597936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598065 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598086 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.648400 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.652257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.653552 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.675419 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.700861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.700965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.700991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 
19:34:50.701023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.701049 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.708748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.732226 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc 
kubenswrapper[4932]: I0218 19:34:50.749007 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.781343 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.797377 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803780 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803792 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.813753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.831443 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.845628 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.857423 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.879989 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.893723 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905909 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.912812 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.925723 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.943579 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.961698 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.981628 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008921 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.040082 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.040282 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.040352 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:35:07.04033395 +0000 UTC m=+70.622288815 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111824 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.157051 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:32:24.484225764 +0000 UTC Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.178488 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.178611 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.178700 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.178634 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.178814 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.178933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215406 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215468 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.317947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.317991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.318002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.318019 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.318033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421200 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421216 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.628949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629028 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629099 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.658525 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.659446 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.663131 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" exitCode=1 Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.663232 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.663284 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.664157 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.664425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.687120 4932 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.706614 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.730423 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732727 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.752743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.772354 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.797487 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.821130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836258 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836346 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836396 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.843585 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.860200 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.873493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.889841 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.907971 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.926722 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.939883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc 
kubenswrapper[4932]: I0218 19:34:51.939954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.939980 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.940011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.940031 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.959585 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 
19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cb
c83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.975389 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.994545 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0
a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.011917 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043110 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043206 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043264 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147801 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147840 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.157539 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:53:12.629093134 +0000 UTC Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.179002 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:52 crc kubenswrapper[4932]: E0218 19:34:52.179281 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251441 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251928 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355820 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.459938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460912 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564579 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564675 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667063 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667222 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.670897 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.675808 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:34:52 crc kubenswrapper[4932]: E0218 19:34:52.676116 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.699367 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.718211 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.737313 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.757351 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.770381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.770718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.770910 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.771071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.771260 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.777982 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.798235 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.825536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.846697 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.872424 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874611 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874687 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.895112 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.916392 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.938996 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.956369 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.976602 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.978864 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979939 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.002219 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.017951 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.042659 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e26
7bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083086 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083786 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.158123 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:01:13.019631869 +0000 UTC Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.178904 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.179000 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.179010 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.179078 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.179256 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.179495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186155 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187546 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187630 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.210037 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215722 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.235562 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.240420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.240724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.240957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.241111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.241330 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.261670 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266702 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.288441 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.293404 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.293790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.294241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.294602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.294947 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.314549 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.314773 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317429 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420899 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.523994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524085 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524168 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627955 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732161 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732364 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835799 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.938937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939107 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.042978 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043144 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146950 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146976 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146995 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.159597 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:46:30.673445297 +0000 UTC
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.178304 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:34:54 crc kubenswrapper[4932]: E0218 19:34:54.178486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250422 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250489 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353574 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456197 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456266 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.559974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560130 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663476 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766910 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870706 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870724 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.973926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.973998 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.974020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.974048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.974070 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.159698 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:32:28.914349544 +0000 UTC
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.178498 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.178579 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.178589 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:34:55 crc kubenswrapper[4932]: E0218 19:34:55.178690 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:34:55 crc kubenswrapper[4932]: E0218 19:34:55.178856 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:34:55 crc kubenswrapper[4932]: E0218 19:34:55.178990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180657 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283237 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386925 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.489838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.489983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.490001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.490031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.490052 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593280 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696326 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696392 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696453 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845152 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845170 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948230 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948336 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052104 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052344 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155749 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155774 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.160865 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:06:06.537281537 +0000 UTC
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.178529 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:34:56 crc kubenswrapper[4932]: E0218 19:34:56.178725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.258951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259022 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259096 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.362018 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465352 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465392 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568530 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670681 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773918 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773937 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876966 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980306 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.082876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.082969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.082987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.083012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.083028 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.161124 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:35:30.943459628 +0000 UTC Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.178995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:57 crc kubenswrapper[4932]: E0218 19:34:57.179128 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.179386 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.179511 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:57 crc kubenswrapper[4932]: E0218 19:34:57.179620 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:57 crc kubenswrapper[4932]: E0218 19:34:57.179757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186450 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186468 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.200573 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.218553 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.237200 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.255154 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.273901 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.289496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.289767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.289970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.290128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.290336 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.292526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.312380 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.331132 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.345890 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.357854 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc 
kubenswrapper[4932]: I0218 19:34:57.368788 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.381515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394570 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 
19:34:57.394586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394600 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.401988 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.415690 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.432865 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.450393 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.467124 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501256 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501269 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.608409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.608788 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.608892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.609005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.609107 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711699 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.814898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.814963 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.814981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.815005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.815023 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917919 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917990 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021684 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124787 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.161528 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:10:43.680088932 +0000 UTC Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.178220 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:58 crc kubenswrapper[4932]: E0218 19:34:58.178413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227961 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331570 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435275 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435317 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538783 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642301 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745008 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745170 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.848478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.848847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.848987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.849165 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.849332 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.952898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.953418 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.953619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.953863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.954044 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057846 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057887 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161597 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161864 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:36:58.566752194 +0000 UTC Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.178337 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:59 crc kubenswrapper[4932]: E0218 19:34:59.178518 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.178801 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:59 crc kubenswrapper[4932]: E0218 19:34:59.178907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.179367 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:59 crc kubenswrapper[4932]: E0218 19:34:59.179477 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264958 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368794 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368818 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368836 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472192 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472290 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575704 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678322 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884751 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989344 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094068 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094170 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094253 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.162918 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:35:47.360175631 +0000 UTC Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.178890 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:00 crc kubenswrapper[4932]: E0218 19:35:00.180047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197608 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301317 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301327 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404332 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404498 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.507913 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508468 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508742 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.611980 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612137 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717827 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.820922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925242 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925323 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027962 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130859 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.164288 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:51:06.271774056 +0000 UTC Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.178897 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.178937 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:01 crc kubenswrapper[4932]: E0218 19:35:01.179314 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.178967 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:01 crc kubenswrapper[4932]: E0218 19:35:01.179612 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:01 crc kubenswrapper[4932]: E0218 19:35:01.179750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233440 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233450 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233477 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335783 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335853 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438911 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541978 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644918 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644963 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747656 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852476 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852540 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852601 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955064 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955089 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057611 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160727 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.165023 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:34:22.016490115 +0000 UTC Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.178327 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:02 crc kubenswrapper[4932]: E0218 19:35:02.178446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263804 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263916 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365639 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.467921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.467975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.467992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.468016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.468033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570706 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570720 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673294 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673319 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775788 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878571 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878664 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.983053 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086302 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086369 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086381 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.165617 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:29:32.877209175 +0000 UTC Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.179450 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.179534 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.179643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.179669 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.179729 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.179913 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188795 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290813 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290896 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.392889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.392961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.392984 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.393012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.393033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494769 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.495635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.495832 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.496071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.496353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.496392 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.518862 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522941 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.536050 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.554272 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557895 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557919 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557936 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.570291 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574278 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.588857 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.589072 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597531 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700254 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700269 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802335 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802353 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904242 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904253 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904295 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006923 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109459 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.166634 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:28:14.949291408 +0000 UTC Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.178374 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:04 crc kubenswrapper[4932]: E0218 19:35:04.178464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.179465 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:35:04 crc kubenswrapper[4932]: E0218 19:35:04.179699 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211289 4932 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313946 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416279 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416311 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.518863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519204 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620869 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620895 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620905 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.827933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.827989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.828013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.828041 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.828064 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930196 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031828 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133527 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133536 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.167091 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:58:31.666527356 +0000 UTC Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.178572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:05 crc kubenswrapper[4932]: E0218 19:35:05.178660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.178700 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:05 crc kubenswrapper[4932]: E0218 19:35:05.178748 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.178773 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:05 crc kubenswrapper[4932]: E0218 19:35:05.178809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235949 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338154 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338181 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338193 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440619 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645505 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645570 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747911 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850295 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952087 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053953 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053976 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156546 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.167298 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:50:23.895432682 +0000 UTC Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.178764 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:06 crc kubenswrapper[4932]: E0218 19:35:06.178932 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259285 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259436 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362271 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362310 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362339 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464417 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464459 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464475 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567277 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567304 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.669621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.669894 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.669961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.670024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.670088 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.772529 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.772819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.772925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.773026 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.773167 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.876680 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.876996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.877130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.877288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.877422 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.980926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.980991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.981040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.981065 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.981082 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083953 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.084018 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.104621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.104931 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.105087 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:35:39.105061717 +0000 UTC m=+102.687016642 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.168306 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:04:28.541863818 +0000 UTC Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.178673 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.178806 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.179187 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.179251 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.179464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.179341 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185645 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.192886 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e0100944
21a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.207968 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.225327 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.238003 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.254009 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.265225 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.278838 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.286974 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287818 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287873 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287901 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.296926 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.307385 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.315639 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.324466 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.335515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.345198 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.356816 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.367399 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.379994 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc 
kubenswrapper[4932]: I0218 19:35:07.391411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391446 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494720 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597419 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597438 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.700838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701230 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701772 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701980 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.805816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806354 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910813 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014127 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014201 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014211 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014244 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117577 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117692 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117711 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.169531 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:32:39.753602732 +0000 UTC
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.178196 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:35:08 crc kubenswrapper[4932]: E0218 19:35:08.178410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221326 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221342 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323836 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323872 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427509 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.533491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.533740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.533911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.534114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.534374 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638478 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.734777 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/0.log" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.734832 4932 generic.go:334] "Generic (PLEG): container finished" podID="1b8d80e2-307e-43b6-9003-e77eef51e084" containerID="e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7" exitCode=1 Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.734868 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerDied","Data":"e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.735331 4932 scope.go:117] "RemoveContainer" containerID="e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745198 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.757583 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.778350 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.803391 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.825358 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848616 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.850505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.867095 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.879662 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e26
7bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.893878 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.907945 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.922980 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.934483 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.950618 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952355 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.963504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.979035 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.997529 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7
daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.012835 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.029804 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054512 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054549 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156561 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156591 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.170711 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:54:05.489085946 +0000 UTC Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.179032 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.179075 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.179082 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:09 crc kubenswrapper[4932]: E0218 19:35:09.179149 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:09 crc kubenswrapper[4932]: E0218 19:35:09.179217 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:09 crc kubenswrapper[4932]: E0218 19:35:09.179310 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259460 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362347 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362357 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362384 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464682 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567504 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.739152 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/0.log" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.739257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.759948 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773695 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773771 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.779861 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.798067 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 
2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.809398 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.822191 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.833556 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.844995 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.856092 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877051 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877386 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.892644 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.911959 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T
19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e
0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.932247 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.945909 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.957411 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.974890 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979528 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.996254 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.007709 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.081796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082029 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082060 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.170959 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:23:41.708676632 +0000 UTC Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.178320 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:10 crc kubenswrapper[4932]: E0218 19:35:10.178466 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184278 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286950 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389941 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493642 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596503 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700415 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803275 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803367 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803417 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803435 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907201 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.171144 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:13:59.361954623 +0000 UTC Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.178502 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.178654 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:11 crc kubenswrapper[4932]: E0218 19:35:11.178671 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:11 crc kubenswrapper[4932]: E0218 19:35:11.178946 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.179273 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:11 crc kubenswrapper[4932]: E0218 19:35:11.179611 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216564 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319678 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319756 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.422934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423719 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.527246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528392 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528800 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631089 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631215 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631261 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734786 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.837843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838156 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838358 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838716 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943999 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048788 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048960 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153404 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153421 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.171945 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:47:46.453309589 +0000 UTC Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.178353 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:12 crc kubenswrapper[4932]: E0218 19:35:12.178680 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257348 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.359928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.359978 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.359993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.360012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.360399 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464784 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567704 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567749 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567769 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670893 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774607 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877597 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877614 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980702 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083787 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083874 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.172966 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:57:30.269729292 +0000 UTC Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.178655 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.178727 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.178893 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.178995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.179455 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.179646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187716 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.198344 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291533 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291574 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.394902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.394967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.394983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.395005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.395022 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497697 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600814 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703358 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703398 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.763372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.763800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.764016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.764252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.764450 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.788923 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795387 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795409 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.816070 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.820779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.820983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.821111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.821312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.821456 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.841313 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846579 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846623 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.866549 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871555 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871607 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.892463 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.892691 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894773 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997648 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100689 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.173353 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:31:42.97760435 +0000 UTC Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.178989 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:14 crc kubenswrapper[4932]: E0218 19:35:14.179229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204752 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204791 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307807 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307873 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307884 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411135 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514967 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617862 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721425 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824506 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824664 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928159 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928308 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032507 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032631 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135766 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.174243 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:35:23.752699865 +0000 UTC Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.178696 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.178730 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.178873 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:15 crc kubenswrapper[4932]: E0218 19:35:15.179086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:15 crc kubenswrapper[4932]: E0218 19:35:15.179300 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:15 crc kubenswrapper[4932]: E0218 19:35:15.179417 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239156 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342653 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445864 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445881 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549433 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.651947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652078 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652096 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755784 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859284 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.962892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.962969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.962990 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.963018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.963040 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065737 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065755 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.169009 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.175224 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:05:07.642692221 +0000 UTC Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.179215 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.179495 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:16 crc kubenswrapper[4932]: E0218 19:35:16.179551 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271955 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.272016 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374310 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478164 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581737 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.683997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684051 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684110 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.767320 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.770066 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.770557 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786194 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786206 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786237 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.787666 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.804330 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.821508 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.839783 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.857067 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.873437 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889550 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 
crc kubenswrapper[4932]: I0218 19:35:16.889624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889675 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.899944 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.916768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc 
kubenswrapper[4932]: I0218 19:35:16.935059 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.952584 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.972836 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.988290 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992736 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992841 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.008588 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.026082 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.040413 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.057269 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.079605 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095553 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.105786 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.175673 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-24 12:05:14.518070468 +0000 UTC Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.179136 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.179207 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.179316 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.179503 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.179544 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.179627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.196062 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198827 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.214543 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.230885 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.300975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301065 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.303589 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.315571 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc 
kubenswrapper[4932]: I0218 19:35:17.327665 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.343051 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.358423 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.368276 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.379169 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.389522 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.398845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403182 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403250 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403262 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.408674 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.421680 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.433743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.446144 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.461332 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.477566 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505787 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505796 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.608772 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609214 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609268 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713686 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.780962 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.782654 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.786330 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" exitCode=1 Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.786377 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.786423 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.788824 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.789122 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.805625 4932 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e5542
26b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815955 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815967 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.819414 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.840131 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.859709 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.877307 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.896125 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 
crc kubenswrapper[4932]: I0218 19:35:17.919595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919629 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.930934 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector 
*v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.947048 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.967528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.988361 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.005504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.020999 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.043236 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.057892 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.073674 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.091547 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.110855 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126750 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.130394 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.176293 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-12 21:23:17.053872213 +0000 UTC Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.178689 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:18 crc kubenswrapper[4932]: E0218 19:35:18.178873 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.229883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.229946 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.229974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.230005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.230030 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333349 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333393 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333411 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436169 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436306 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436324 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540438 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540492 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644618 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748436 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.792986 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.798328 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:18 crc kubenswrapper[4932]: E0218 19:35:18.798665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.822639 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.845375 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.856725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.856892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.856977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc 
kubenswrapper[4932]: I0218 19:35:18.857018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.857097 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.872999 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.894068 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.910612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.933245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc 
kubenswrapper[4932]: I0218 19:35:18.960911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960935 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960953 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.966798 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector 
*v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.985427 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.007139 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.028926 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.050915 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064471 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064517 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064579 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.067564 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.087748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.107002 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.123845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.140344 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.164563 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168592 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.176514 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:10:47.539530886 +0000 UTC Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.178917 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:19 crc kubenswrapper[4932]: E0218 19:35:19.179079 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.179462 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.179585 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:19 crc kubenswrapper[4932]: E0218 19:35:19.179720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:19 crc kubenswrapper[4932]: E0218 19:35:19.180035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.184741 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control
-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272089 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272135 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272225 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375472 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478862 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582298 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582354 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.684899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.684977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.685002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.685033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.685058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788343 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892378 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892433 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995818 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995919 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.177219 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:28:58.328890301 +0000 UTC Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.178441 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.178608 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202313 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202427 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202445 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305385 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305426 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513168 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513270 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616523 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616576 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616635 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719411 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821683 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925166 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971676 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971748 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971792 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971814 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971910 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.971883134 +0000 UTC m=+148.553838019 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971937 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971966 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971973 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972066 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972096 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971995 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972136 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972008 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.971987326 +0000 UTC m=+148.553942181 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972293 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.972265423 +0000 UTC m=+148.554220458 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972939 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.972910497 +0000 UTC m=+148.554865532 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.027962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028059 4932 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.072151 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.072373 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.072344283 +0000 UTC m=+148.654299158 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131399 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131486 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.177557 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:46:02.279422451 +0000 UTC Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.180289 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.180334 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.180371 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.180480 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.180621 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.180762 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235672 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338793 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442808 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.544979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545076 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647992 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750676 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853711 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956852 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956953 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956970 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162736 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.178331 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:56:22.004214937 +0000 UTC Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.178527 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:22 crc kubenswrapper[4932]: E0218 19:35:22.178709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265601 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265627 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369201 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369316 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369408 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472696 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576367 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576543 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.680987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681108 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784624 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.891987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892068 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892094 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.995923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.995995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.996014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.996038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.996056 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098752 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098767 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178467 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:22:06.021164955 +0000 UTC Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178774 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178756 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.178929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.179023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.179133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.200994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201114 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304405 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304529 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407438 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407456 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407504 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510533 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510569 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613867 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717227 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717256 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820350 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820422 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923526 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923700 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938579 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938604 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938643 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.959525 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964864 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.983271 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988642 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.008209 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013336 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013427 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.028588 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.033003 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.051895 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:24Z is after 2025-08-24T17:21:41Z"
Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.052034 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054032 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054057 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054068 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157576 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.178939 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:43:20.077823217 +0000 UTC
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.179042 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.179261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261369 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261395 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.363993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364064 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364127 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468362 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468819 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572200 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572349 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675772 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779802 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883541 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.986906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.986979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.987016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.987050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.987074 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090567 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090696 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.178800 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.178903 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:35:25 crc kubenswrapper[4932]: E0218 19:35:25.179004 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.178830 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.179097 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:55:38.272900807 +0000 UTC
Feb 18 19:35:25 crc kubenswrapper[4932]: E0218 19:35:25.179245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:35:25 crc kubenswrapper[4932]: E0218 19:35:25.179443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193422 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297319 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297479 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400709 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503804 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.607005 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710143 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812703 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812743 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915651 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018507 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018681 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121500 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.178110 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:35:26 crc kubenswrapper[4932]: E0218 19:35:26.178324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.180247 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:43:33.627133025 +0000 UTC
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.224920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.224986 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.225003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.225024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.225043 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327986 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430728 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534102 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534135 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534155 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637509 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739768 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739818 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842329 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842346 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.945811 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.945967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.946002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.946042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.946069 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049352 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049495 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.153001 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.183581 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:27 crc kubenswrapper[4932]: E0218 19:35:27.183837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.183901 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:02:38.811386878 +0000 UTC Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.184091 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:27 crc kubenswrapper[4932]: E0218 19:35:27.184311 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.184494 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:27 crc kubenswrapper[4932]: E0218 19:35:27.184702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.202464 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.219345 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.240849 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256506 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc 
kubenswrapper[4932]: I0218 19:35:27.256528 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256579 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.272019 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector 
*v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.291769 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.307454 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.321208 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.336499 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.347009 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.356737 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.358948 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359019 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359034 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359068 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.369523 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.382680 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.392269 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.403513 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.418349 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.439905 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.458421 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462317 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462334 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.478863 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564916 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667167 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771441 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771502 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874251 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874290 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977371 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.080716 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.081585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.081778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.081922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.082074 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.178442 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:35:28 crc kubenswrapper[4932]: E0218 19:35:28.178799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.184243 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:14:02.363350346 +0000 UTC
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185468 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185486 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.288461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.288798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.289013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.289216 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.289376 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392517 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.495958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599968 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599986 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703152 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806293 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806425 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909845 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909902 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.012975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013154 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115455 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115558 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.178822 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.178904 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:35:29 crc kubenswrapper[4932]: E0218 19:35:29.179007 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c"
Feb 18 19:35:29 crc kubenswrapper[4932]: E0218 19:35:29.179111 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.179276 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:35:29 crc kubenswrapper[4932]: E0218 19:35:29.179638 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.184456 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:43:06.355076513 +0000 UTC
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218702 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.322989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323063 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323137 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427342 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.634922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635805 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.739892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.739973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.739996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.740026 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.740050 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843335 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843379 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946942 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946962 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049704 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153029 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153089 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153105 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153161 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.178619 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:35:30 crc kubenswrapper[4932]: E0218 19:35:30.178815 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.185216 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:20:25.082154191 +0000 UTC Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256837 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361146 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361166 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464598 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567325 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567343 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567382 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670405 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773709 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.876911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.876982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.877009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.877037 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.877058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980442 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083668 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.179380 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.179508 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:31 crc kubenswrapper[4932]: E0218 19:35:31.179581 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:31 crc kubenswrapper[4932]: E0218 19:35:31.179678 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.179768 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:31 crc kubenswrapper[4932]: E0218 19:35:31.179862 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.185355 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:24:39.065591145 +0000 UTC Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.186987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187160 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289910 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289951 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393159 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393237 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496757 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599961 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702847 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806724 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909698 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012646 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.115947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116076 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.178687 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:32 crc kubenswrapper[4932]: E0218 19:35:32.178863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.185864 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:01:17.755211537 +0000 UTC Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218681 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321854 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424519 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526859 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630104 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630245 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732760 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732806 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835552 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938498 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041743 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041783 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145251 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145309 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.178951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.179056 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.179137 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.179160 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.179401 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.180403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.180503 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.181413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.186123 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:16:06.083773546 +0000 UTC Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.210447 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248576 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248637 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.353000 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455782 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.558853 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.558920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.558938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.559062 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.559084 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662451 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765901 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765953 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868973 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972146 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972170 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.178919 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.179414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181611 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181663 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.186681 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:30:14.589192408 +0000 UTC Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364338 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364405 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364470 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.384803 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390332 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390454 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.412140 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.417337 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.417384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.417416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.417439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.417457 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.438629 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.443470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.443525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.443548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.443574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.443594 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.465388 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470841 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.490917 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.491220 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493524 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596646 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699837 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803369 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803448 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906452 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906593 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009561 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009709 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113645 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.178691 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.178830 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.178703 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:35 crc kubenswrapper[4932]: E0218 19:35:35.178957 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:35 crc kubenswrapper[4932]: E0218 19:35:35.179047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:35 crc kubenswrapper[4932]: E0218 19:35:35.179246 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.187604 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:57:56.834095176 +0000 UTC Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217462 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320062 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320123 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320212 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.422940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.422994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.423009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.423033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.423081 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.525925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526064 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629523 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629603 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732943 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732984 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836449 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939390 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042770 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042876 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146057 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146106 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146224 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.179085 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:36 crc kubenswrapper[4932]: E0218 19:35:36.179274 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.188241 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:33:01.55861455 +0000 UTC Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249446 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352674 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455883 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558316 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558358 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558374 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661703 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765476 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868450 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971813 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971877 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075691 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.178203 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:37 crc kubenswrapper[4932]: E0218 19:35:37.178389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.178217 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:37 crc kubenswrapper[4932]: E0218 19:35:37.178771 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179741 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.180337 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:37 crc kubenswrapper[4932]: E0218 19:35:37.180686 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.189634 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:07:59.219936142 +0000 UTC Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.206058 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.237963 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.253245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.267993 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e26
7bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.285883 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286293 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286317 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286377 4932 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.305720 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.324739 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.337527 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w
b9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.362831 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcce3828-1fe2-412c-85ca-8f2823938570\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ec1738bcf1d2dd647db0d373af934b11154dff53044ff834fe7257b32f17d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfac026c12a8bf498c2bb79250930f207d0064ffd0edbef2e5e24cfa93a62971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d6e02f2655abbd491fd87d24365a4cd72db2765eb2c05c5553febfa7be962a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb79fa7b7288b35bb4bcd652d79107019527c1171639893fef92b89d26303412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cb57625b5582e558b55c361284729fc2052214bf528e7458937568887515e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa20d1549113cb3aad03ce60838085f0ad49599a6fb652c6818377b9baee6edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa20d1549113cb3aad03ce60838085f0ad49599a6fb652c6818377b9baee6edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676f220fbabbdf1f764a0ac9856dcfc9d8f6543b96228f378b3bd30c8ab34986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676f220fbabbdf1f764a0ac9856dcfc9d8f6543b96228f378b3bd30c8ab34986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://05c608056602e82f1d72f327241bccbf4b1a4f33f9e8512cbf3c44689c7e7ec0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05c608056602e82f1d72f327241bccbf4b1a4f33f9e8512cbf3c44689c7e7ec0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.375752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.386906 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391539 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.404686 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.425667 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.438882 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.454110 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.472841 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.491473 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493610 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493860 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.510406 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
82ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.527752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596443 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699505 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801605 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904491 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.010908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.010986 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.011023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.011073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.011098 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113739 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.178729 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:35:38 crc kubenswrapper[4932]: E0218 19:35:38.179307 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.190428 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:41:52.620619613 +0000 UTC
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.216940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217097 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321345 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321366 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424409 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527459 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527476 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527489 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734280 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734316 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.836960 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837022 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837068 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837086 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939960 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939979 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.041959 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042071 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145088 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145110 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145209 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178165 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178339 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178340 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178493 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178547 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178590 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178691 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178777 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.179339 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:36:43.179070369 +0000 UTC m=+166.761025244 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.190605 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:23:21.46863921 +0000 UTC Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247558 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454141 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556768 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556786 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659532 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762845 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762889 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866477 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968800 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071153 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173528 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.178983 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:40 crc kubenswrapper[4932]: E0218 19:35:40.179155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.191421 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:22:25.067472898 +0000 UTC Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276226 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276238 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378833 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481783 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585085 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585115 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791988 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.792009 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895114 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997869 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997968 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100263 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100280 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.178260 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.178534 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:41 crc kubenswrapper[4932]: E0218 19:35:41.178530 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.178927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:41 crc kubenswrapper[4932]: E0218 19:35:41.179010 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:41 crc kubenswrapper[4932]: E0218 19:35:41.179128 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.191684 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:53:52.217091186 +0000 UTC Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203614 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305948 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305960 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409476 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409522 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513279 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513292 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.615938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.615993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.616007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.616027 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.616042 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718846 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821803 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924820 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.179048 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:42 crc kubenswrapper[4932]: E0218 19:35:42.179317 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.192227 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:01:49.515726723 +0000 UTC Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233917 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233994 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337135 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.439964 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543687 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852499 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954326 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057268 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057343 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057383 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160963 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.178954 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.178988 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.179080 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:43 crc kubenswrapper[4932]: E0218 19:35:43.179741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:43 crc kubenswrapper[4932]: E0218 19:35:43.179844 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:43 crc kubenswrapper[4932]: E0218 19:35:43.179955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.192594 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:14:31.591868248 +0000 UTC Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264388 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264452 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367673 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469935 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.470004 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572297 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675717 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675771 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.880988 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881875 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984311 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.086945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.086994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.087006 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.087031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.087045 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.178448 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:44 crc kubenswrapper[4932]: E0218 19:35:44.178611 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189185 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.193657 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:41:17.243690407 +0000 UTC Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291659 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393318 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496166 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496316 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599949 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704076 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704161 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704191 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807631 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807980 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.866620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867821 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.933890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp"] Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.934460 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.939168 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.939540 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.939749 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.943087 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.981569 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.981545378 podStartE2EDuration="1m24.981545378s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:44.981502967 +0000 UTC m=+108.563457902" watchObservedRunningTime="2026-02-18 19:35:44.981545378 +0000 UTC m=+108.563500233" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.981736 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" podStartSLOduration=84.981731772 podStartE2EDuration="1m24.981731772s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:44.961600893 +0000 UTC m=+108.543555768" watchObservedRunningTime="2026-02-18 
19:35:44.981731772 +0000 UTC m=+108.563686617" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.013239 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jmmxw" podStartSLOduration=86.013213974 podStartE2EDuration="1m26.013213974s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.013094171 +0000 UTC m=+108.595049076" watchObservedRunningTime="2026-02-18 19:35:45.013213974 +0000 UTC m=+108.595168869" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc 
kubenswrapper[4932]: I0218 19:35:45.045870 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045959 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.059701 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podStartSLOduration=86.05967953 podStartE2EDuration="1m26.05967953s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.029434875 +0000 UTC m=+108.611389770" watchObservedRunningTime="2026-02-18 19:35:45.05967953 +0000 UTC m=+108.641634415" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.060253 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" podStartSLOduration=85.060241322 podStartE2EDuration="1m25.060241322s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.059909685 +0000 UTC m=+108.641864560" 
watchObservedRunningTime="2026-02-18 19:35:45.060241322 +0000 UTC m=+108.642196207" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.079327 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.079297067 podStartE2EDuration="1m28.079297067s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.079114333 +0000 UTC m=+108.661069218" watchObservedRunningTime="2026-02-18 19:35:45.079297067 +0000 UTC m=+108.661251942" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.121562 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sj8bg" podStartSLOduration=85.121529518 podStartE2EDuration="1m25.121529518s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.121033967 +0000 UTC m=+108.702988852" watchObservedRunningTime="2026-02-18 19:35:45.121529518 +0000 UTC m=+108.703484413" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") 
" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147260 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147296 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147364 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.149665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.153525 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.167851 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.16783527 podStartE2EDuration="58.16783527s" podCreationTimestamp="2026-02-18 19:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.15346549 +0000 UTC m=+108.735420375" watchObservedRunningTime="2026-02-18 19:35:45.16783527 +0000 UTC m=+108.749790125" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.177409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: 
\"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.179148 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.179204 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.179434 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:45 crc kubenswrapper[4932]: E0218 19:35:45.179615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:45 crc kubenswrapper[4932]: E0218 19:35:45.179741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:45 crc kubenswrapper[4932]: E0218 19:35:45.179814 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.185686 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.185672348 podStartE2EDuration="32.185672348s" podCreationTimestamp="2026-02-18 19:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.168801882 +0000 UTC m=+108.750756727" watchObservedRunningTime="2026-02-18 19:35:45.185672348 +0000 UTC m=+108.767627203" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.194848 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:52:04.88344988 +0000 UTC Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.195593 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.201886 4932 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.260035 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.266565 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.266541691 podStartE2EDuration="12.266541691s" podCreationTimestamp="2026-02-18 19:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.263070353 +0000 UTC m=+108.845025238" watchObservedRunningTime="2026-02-18 19:35:45.266541691 +0000 UTC m=+108.848496566" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.889153 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" event={"ID":"e98127f1-d583-4f3a-bb5b-efd0b4d6b367","Type":"ContainerStarted","Data":"32c042b4c6ab5823a2643c599e7abae1d79bc6409dcebdafa2661577d133b350"} Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.889528 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" event={"ID":"e98127f1-d583-4f3a-bb5b-efd0b4d6b367","Type":"ContainerStarted","Data":"980dfca20ba2887bc027bdc5ecf95018bdbab1469583ed04a2db15b6eeef5b93"} Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.908265 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bz9kj" podStartSLOduration=86.908238495 podStartE2EDuration="1m26.908238495s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.3243692 +0000 UTC m=+108.906324045" watchObservedRunningTime="2026-02-18 19:35:45.908238495 +0000 UTC m=+109.490193350" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.908877 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" podStartSLOduration=85.908864619 podStartE2EDuration="1m25.908864619s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.907943228 +0000 UTC m=+109.489898113" watchObservedRunningTime="2026-02-18 19:35:45.908864619 +0000 UTC m=+109.490819474" Feb 18 19:35:46 crc kubenswrapper[4932]: I0218 19:35:46.179007 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:46 crc kubenswrapper[4932]: E0218 19:35:46.179164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:47 crc kubenswrapper[4932]: I0218 19:35:47.314907 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:47 crc kubenswrapper[4932]: I0218 19:35:47.314927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:47 crc kubenswrapper[4932]: E0218 19:35:47.315857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:47 crc kubenswrapper[4932]: I0218 19:35:47.315891 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:47 crc kubenswrapper[4932]: E0218 19:35:47.315984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:47 crc kubenswrapper[4932]: E0218 19:35:47.316056 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:48 crc kubenswrapper[4932]: I0218 19:35:48.178212 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:48 crc kubenswrapper[4932]: E0218 19:35:48.178502 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:48 crc kubenswrapper[4932]: I0218 19:35:48.179374 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:48 crc kubenswrapper[4932]: E0218 19:35:48.179600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:49 crc kubenswrapper[4932]: I0218 19:35:49.178432 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:49 crc kubenswrapper[4932]: I0218 19:35:49.178547 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:49 crc kubenswrapper[4932]: E0218 19:35:49.178712 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:49 crc kubenswrapper[4932]: I0218 19:35:49.178981 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:49 crc kubenswrapper[4932]: E0218 19:35:49.179082 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:49 crc kubenswrapper[4932]: E0218 19:35:49.179323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:50 crc kubenswrapper[4932]: I0218 19:35:50.179106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:50 crc kubenswrapper[4932]: E0218 19:35:50.179324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:51 crc kubenswrapper[4932]: I0218 19:35:51.178795 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:51 crc kubenswrapper[4932]: I0218 19:35:51.178821 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:51 crc kubenswrapper[4932]: I0218 19:35:51.178848 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:51 crc kubenswrapper[4932]: E0218 19:35:51.178911 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:51 crc kubenswrapper[4932]: E0218 19:35:51.179050 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:51 crc kubenswrapper[4932]: E0218 19:35:51.179089 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:52 crc kubenswrapper[4932]: I0218 19:35:52.178301 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:52 crc kubenswrapper[4932]: E0218 19:35:52.178446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:53 crc kubenswrapper[4932]: I0218 19:35:53.179247 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:53 crc kubenswrapper[4932]: I0218 19:35:53.179316 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:53 crc kubenswrapper[4932]: I0218 19:35:53.179271 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:53 crc kubenswrapper[4932]: E0218 19:35:53.179426 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:53 crc kubenswrapper[4932]: E0218 19:35:53.179523 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:53 crc kubenswrapper[4932]: E0218 19:35:53.179623 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.178902 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:54 crc kubenswrapper[4932]: E0218 19:35:54.179122 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.926646 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927379 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/0.log" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927455 4932 generic.go:334] "Generic (PLEG): container finished" podID="1b8d80e2-307e-43b6-9003-e77eef51e084" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" exitCode=1 Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927527 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerDied","Data":"3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda"} Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927598 4932 scope.go:117] "RemoveContainer" containerID="e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.928347 4932 scope.go:117] "RemoveContainer" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" Feb 18 19:35:54 crc kubenswrapper[4932]: E0218 19:35:54.928643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sj8bg_openshift-multus(1b8d80e2-307e-43b6-9003-e77eef51e084)\"" pod="openshift-multus/multus-sj8bg" podUID="1b8d80e2-307e-43b6-9003-e77eef51e084" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.179342 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.179356 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:55 crc kubenswrapper[4932]: E0218 19:35:55.179541 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:55 crc kubenswrapper[4932]: E0218 19:35:55.179685 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.180044 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:55 crc kubenswrapper[4932]: E0218 19:35:55.180432 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.932966 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:35:56 crc kubenswrapper[4932]: I0218 19:35:56.178804 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:56 crc kubenswrapper[4932]: E0218 19:35:56.179039 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.130123 4932 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 19:35:57 crc kubenswrapper[4932]: I0218 19:35:57.179130 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:57 crc kubenswrapper[4932]: I0218 19:35:57.179275 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.181990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:57 crc kubenswrapper[4932]: I0218 19:35:57.182021 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.182244 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.182414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.317024 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:35:58 crc kubenswrapper[4932]: I0218 19:35:58.178247 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:58 crc kubenswrapper[4932]: E0218 19:35:58.178412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:59 crc kubenswrapper[4932]: I0218 19:35:59.178933 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:59 crc kubenswrapper[4932]: I0218 19:35:59.179051 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:59 crc kubenswrapper[4932]: E0218 19:35:59.179096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:59 crc kubenswrapper[4932]: I0218 19:35:59.179135 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:59 crc kubenswrapper[4932]: E0218 19:35:59.179240 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:59 crc kubenswrapper[4932]: E0218 19:35:59.179395 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:00 crc kubenswrapper[4932]: I0218 19:36:00.179103 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:00 crc kubenswrapper[4932]: E0218 19:36:00.179346 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:01 crc kubenswrapper[4932]: I0218 19:36:01.180101 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:01 crc kubenswrapper[4932]: I0218 19:36:01.180250 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:01 crc kubenswrapper[4932]: I0218 19:36:01.180112 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:01 crc kubenswrapper[4932]: E0218 19:36:01.180372 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:01 crc kubenswrapper[4932]: E0218 19:36:01.180514 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:01 crc kubenswrapper[4932]: E0218 19:36:01.180648 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:02 crc kubenswrapper[4932]: I0218 19:36:02.178293 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:02 crc kubenswrapper[4932]: E0218 19:36:02.178517 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:02 crc kubenswrapper[4932]: E0218 19:36:02.318917 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.178290 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:03 crc kubenswrapper[4932]: E0218 19:36:03.178465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.178807 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.178962 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:03 crc kubenswrapper[4932]: E0218 19:36:03.179091 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.179218 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:36:03 crc kubenswrapper[4932]: E0218 19:36:03.179312 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.962806 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.965102 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.965551 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.152380 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podStartSLOduration=104.152354653 podStartE2EDuration="1m44.152354653s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:03.99162092 +0000 UTC m=+127.573575765" watchObservedRunningTime="2026-02-18 19:36:04.152354653 +0000 UTC m=+127.734309518" Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.153718 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kdjbt"] Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.153810 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:04 crc kubenswrapper[4932]: E0218 19:36:04.153913 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.178387 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:04 crc kubenswrapper[4932]: E0218 19:36:04.178510 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179394 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:05 crc kubenswrapper[4932]: E0218 19:36:05.179526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179587 4932 scope.go:117] "RemoveContainer" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179678 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179701 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:05 crc kubenswrapper[4932]: E0218 19:36:05.179915 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:05 crc kubenswrapper[4932]: E0218 19:36:05.180035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.975545 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.975968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b"} Feb 18 19:36:06 crc kubenswrapper[4932]: I0218 19:36:06.178999 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:06 crc kubenswrapper[4932]: E0218 19:36:06.179141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:07 crc kubenswrapper[4932]: I0218 19:36:07.178613 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:07 crc kubenswrapper[4932]: I0218 19:36:07.178613 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:07 crc kubenswrapper[4932]: E0218 19:36:07.181492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:07 crc kubenswrapper[4932]: I0218 19:36:07.181561 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:07 crc kubenswrapper[4932]: E0218 19:36:07.181684 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:07 crc kubenswrapper[4932]: E0218 19:36:07.181798 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:08 crc kubenswrapper[4932]: I0218 19:36:08.178824 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:08 crc kubenswrapper[4932]: I0218 19:36:08.181993 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:36:08 crc kubenswrapper[4932]: I0218 19:36:08.182234 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.178252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.178299 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.178734 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.181718 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.181791 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.181968 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.184096 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.430220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 19:36:15 crc 
kubenswrapper[4932]: I0218 19:36:15.477573 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cn2nc"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.478406 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.481966 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.483398 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.484235 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.485075 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.485475 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.485980 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.486443 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.486250 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.486868 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.487222 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.487269 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.487310 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.487758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.489091 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.489146 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.489529 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.489582 4932 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.489773 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.489928 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.490097 4932 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.490158 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492372 4932 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492431 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492535 4932 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492567 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API 
group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492661 4932 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492690 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492776 4932 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492808 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.493794 4932 reflector.go:561] 
object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.493835 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.493952 4932 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.493986 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.500024 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.501105 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.504053 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.504789 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511100 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511395 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511623 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511707 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.512162 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jr49c"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.512685 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.513064 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.514550 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.518591 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.518800 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.519108 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.519200 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.518813 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520237 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520264 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520441 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.523003 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f874p"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.523630 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.524637 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.525234 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.525683 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.526359 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.527033 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.528705 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.529282 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.533257 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.534212 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538342 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538727 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538438 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538510 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539994 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539288 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539299 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.540818 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541197 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541361 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541507 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541643 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541823 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541959 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.542105 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.542293 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.542437 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543576 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543864 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543970 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544079 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544262 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544562 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544603 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544682 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545204 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545740 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545890 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545990 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546099 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546228 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546317 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546438 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hphc8"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546494 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546650 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546762 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546856 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547070 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547158 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547221 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547940 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548510 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.557512 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.573474 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.574396 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.578287 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.579316 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.580781 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.581410 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.581637 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.581913 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.582636 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.584882 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585403 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585578 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585708 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586081 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586196 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586247 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586336 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586369 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586456 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.587382 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.587973 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.588150 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.588673 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.588791 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.589246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.591148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cn2nc"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.593082 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.594081 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.596014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.597294 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.600456 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.611252 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.612838 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.613348 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.623874 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vqskh"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.627575 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8xrbm"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.632515 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.635010 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.635327 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.635015 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.636154 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.636378 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.636867 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.637152 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.637487 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.639285 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.639707 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.640146 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.640475 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.640576 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.641399 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.641699 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.643864 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.644367 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.645347 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.645659 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.646137 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.649370 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.650487 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.651150 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.652790 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.653565 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.655947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.656644 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.657075 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.657442 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.658706 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.659725 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.660614 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.662199 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx49r"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.662584 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.663891 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.664974 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.665693 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.666874 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.667990 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.668474 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.669303 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hphc8"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.670373 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.671728 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.672851 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.674223 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.676127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.676438 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.678319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b62af0-116d-4918-a691-e7040fd7dc22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682279 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-serving-cert\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682444 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-client\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682463 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b62af0-116d-4918-a691-e7040fd7dc22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682535 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5kp\" (UniqueName: \"kubernetes.io/projected/d75d91b3-7800-4645-b272-768f9d02f81b-kube-api-access-6l5kp\") pod \"downloads-7954f5f757-cn2nc\" (UID: \"d75d91b3-7800-4645-b272-768f9d02f81b\") " pod="openshift-console/downloads-7954f5f757-cn2nc"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxzb\" (UniqueName: \"kubernetes.io/projected/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-kube-api-access-chxzb\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682604 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdt6s\" (UniqueName: \"kubernetes.io/projected/09b62af0-116d-4918-a691-e7040fd7dc22-kube-api-access-sdt6s\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682630 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-config\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682643 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-service-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.686939 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.687103 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.689268 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jr49c"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.690491 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.691678 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nqdfv"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.693660 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.693739 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nqdfv"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.695501 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.700626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.700951 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.702896 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.705605 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.709203 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vqskh"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.713388 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.714672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.716248 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f874p"]
Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.720250 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"]
Feb 18 19:36:15 crc
kubenswrapper[4932]: I0218 19:36:15.722391 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.725375 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.726227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.727604 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.729434 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.730627 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.732106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nqdfv"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.732901 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rmh4d"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.733456 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.734276 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jsz8m"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.734935 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.735111 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.736814 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.737713 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx49r"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.738736 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.741154 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.741414 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.742672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.744135 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.745159 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rmh4d"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.746445 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-dpln6"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.747306 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.747801 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dpln6"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.760747 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.780832 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.782987 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdt6s\" (UniqueName: \"kubernetes.io/projected/09b62af0-116d-4918-a691-e7040fd7dc22-kube-api-access-sdt6s\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783031 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-config\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-service-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783070 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b62af0-116d-4918-a691-e7040fd7dc22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b62af0-116d-4918-a691-e7040fd7dc22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-serving-cert\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-client\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783189 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5kp\" (UniqueName: \"kubernetes.io/projected/d75d91b3-7800-4645-b272-768f9d02f81b-kube-api-access-6l5kp\") pod \"downloads-7954f5f757-cn2nc\" (UID: \"d75d91b3-7800-4645-b272-768f9d02f81b\") " pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783217 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chxzb\" (UniqueName: \"kubernetes.io/projected/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-kube-api-access-chxzb\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b62af0-116d-4918-a691-e7040fd7dc22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.784093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.784127 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-config\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.784837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-service-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.788348 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-serving-cert\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.788388 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-client\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.788717 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b62af0-116d-4918-a691-e7040fd7dc22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.800702 4932 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.820540 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.880542 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.900328 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.920765 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.941669 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.960849 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.981002 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.000196 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.021389 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.041775 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" 
Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.061991 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.081407 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.102605 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.121250 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.142433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.161529 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.182111 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.201732 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.221919 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.241834 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.261878 4932 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.281565 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.312968 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.321311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.341988 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.361949 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.383007 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.401232 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.420946 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.441401 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.462078 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 
18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.482129 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.501167 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.521957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.542256 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.561954 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.582099 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.601608 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.622039 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.642030 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.660108 4932 request.go:700] Waited for 1.015405723s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.662111 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.681246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.701258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.722216 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.741793 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.761076 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.780909 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.801380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.821528 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:36:16 crc 
kubenswrapper[4932]: I0218 19:36:16.841013 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.860902 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.881161 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.902244 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.931796 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.941378 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.962251 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.981612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.001903 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.020858 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.041217 4932 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.061970 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.081004 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.101428 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.121124 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.141764 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.162146 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.180392 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.201305 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.221433 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.241625 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.261728 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.280622 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.301938 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.321798 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.341529 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.361654 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.381625 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.401073 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.421041 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.441903 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.461231 4932 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.480788 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.501891 4932 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.521533 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.562801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdt6s\" (UniqueName: \"kubernetes.io/projected/09b62af0-116d-4918-a691-e7040fd7dc22-kube-api-access-sdt6s\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.589845 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chxzb\" (UniqueName: \"kubernetes.io/projected/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-kube-api-access-chxzb\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.607800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608053 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-trusted-ca\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608392 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-config\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608494 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82qp\" (UniqueName: 
\"kubernetes.io/projected/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-kube-api-access-h82qp\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-encryption-config\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-serving-cert\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608814 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608899 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-auth-proxy-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.609001 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7e80fe-b260-461e-a11b-633a14eb304d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609160 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqqv\" (UniqueName: \"kubernetes.io/projected/c7dff6ec-6703-40fb-a94a-c1d8b4641703-kube-api-access-bvqqv\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-serving-cert\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-machine-approver-tls\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609364 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609392 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609425 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-etcd-client\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 
18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609542 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609571 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609614 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609645 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609676 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609703 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26869f13-c7ee-411c-85a1-72338142184c-audit-dir\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.609763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609791 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609820 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7e80fe-b260-461e-a11b-633a14eb304d-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609847 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-audit-policies\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609876 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-config\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: 
\"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609936 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609965 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609995 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610024 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610053 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610274 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610478 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-client\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbq9\" (UniqueName: \"kubernetes.io/projected/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-kube-api-access-bfbq9\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610714 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-config\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610797 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc 
kubenswrapper[4932]: E0218 19:36:17.610842 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.110814507 +0000 UTC m=+141.692769492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610952 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611033 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jr49c\" (UID: 
\"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611104 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611143 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f446d\" (UniqueName: \"kubernetes.io/projected/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-kube-api-access-f446d\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-serving-cert\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611308 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-images\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611372 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611403 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4fr\" (UniqueName: \"kubernetes.io/projected/26869f13-c7ee-411c-85a1-72338142184c-kube-api-access-gm4fr\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611444 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47777b7a-7599-4366-8e0f-a2ddf382e6ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" 
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611475 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611539 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611607 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-serving-cert\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611637 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tqj\" (UniqueName: 
\"kubernetes.io/projected/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-kube-api-access-t4tqj\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611697 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dff6ec-6703-40fb-a94a-c1d8b4641703-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611764 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611809 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-279w9\" (UniqueName: \"kubernetes.io/projected/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-kube-api-access-279w9\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611852 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 
19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611904 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-encryption-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612042 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit-dir\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612099 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5ca023-fc82-4365-b2f9-f57220013a9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612290 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxl8c\" (UniqueName: \"kubernetes.io/projected/1c5ca023-fc82-4365-b2f9-f57220013a9f-kube-api-access-qxl8c\") pod 
\"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612353 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dff6ec-6703-40fb-a94a-c1d8b4641703-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612537 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-node-pullsecrets\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-service-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 
19:36:17.612786 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612874 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612957 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pvb\" (UniqueName: \"kubernetes.io/projected/9a7e80fe-b260-461e-a11b-633a14eb304d-kube-api-access-62pvb\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613152 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613313 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-image-import-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47777b7a-7599-4366-8e0f-a2ddf382e6ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613411 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6wf\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-kube-api-access-2l6wf\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.615626 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5kp\" (UniqueName: \"kubernetes.io/projected/d75d91b3-7800-4645-b272-768f9d02f81b-kube-api-access-6l5kp\") pod \"downloads-7954f5f757-cn2nc\" (UID: \"d75d91b3-7800-4645-b272-768f9d02f81b\") " pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.620688 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.641534 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.673789 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.679362 4932 request.go:700] Waited for 1.359654297s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0 Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.681487 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.702396 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714082 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714217 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74a8d999-1731-4a72-8ca8-25913744a8e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa8e769a-613b-40f2-9d07-b034d7871302-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.714297 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.214268673 +0000 UTC m=+141.796223558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714339 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-certs\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714381 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-srv-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhpt\" (UniqueName: \"kubernetes.io/projected/bac9c1de-1cfe-48d3-aafc-ddb41647c661-kube-api-access-8vhpt\") pod \"ingress-canary-rmh4d\" 
(UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714494 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714526 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prh8n\" (UniqueName: \"kubernetes.io/projected/93bf45fc-6447-479a-83d0-c9418ecb8270-kube-api-access-prh8n\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f446d\" (UniqueName: \"kubernetes.io/projected/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-kube-api-access-f446d\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzpp\" (UniqueName: \"kubernetes.io/projected/df2da7f7-2427-4099-ba40-855a7e850256-kube-api-access-xkzpp\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715235 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/522d227a-c827-415e-9e8b-e5907ba83363-service-ca-bundle\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftj25\" (UniqueName: \"kubernetes.io/projected/522d227a-c827-415e-9e8b-e5907ba83363-kube-api-access-ftj25\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715428 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/998697c8-1e0d-46ae-b92f-ae8faf0faef5-proxy-tls\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-mountpoint-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715592 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-serving-cert\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715637 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tqj\" (UniqueName: \"kubernetes.io/projected/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-kube-api-access-t4tqj\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715737 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-279w9\" (UniqueName: \"kubernetes.io/projected/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-kube-api-access-279w9\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715780 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgpm\" (UniqueName: \"kubernetes.io/projected/81931b41-8917-4936-9e02-52f7c8c0f1c1-kube-api-access-stgpm\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5ca023-fc82-4365-b2f9-f57220013a9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715876 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-images\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716052 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4c04fd14-9dfc-4c0f-8125-8663eac51a45-config\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716081 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/715b331b-b140-461c-9a06-ba6ede3af8b6-tmpfs\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716580 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716808 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716960 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxl8c\" (UniqueName: \"kubernetes.io/projected/1c5ca023-fc82-4365-b2f9-f57220013a9f-kube-api-access-qxl8c\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 
19:36:17.717313 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717556 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/998697c8-1e0d-46ae-b92f-ae8faf0faef5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717633 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/df2da7f7-2427-4099-ba40-855a7e850256-signing-cabundle\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717772 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717825 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c04fd14-9dfc-4c0f-8125-8663eac51a45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: 
\"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717877 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh64w\" (UniqueName: \"kubernetes.io/projected/bd39f7e2-211c-4104-a72d-5374a6e95ee1-kube-api-access-nh64w\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717929 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/df2da7f7-2427-4099-ba40-855a7e850256-signing-key\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717940 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pvb\" (UniqueName: \"kubernetes.io/projected/9a7e80fe-b260-461e-a11b-633a14eb304d-kube-api-access-62pvb\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718081 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwssc\" (UniqueName: \"kubernetes.io/projected/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-kube-api-access-xwssc\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718156 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93bf45fc-6447-479a-83d0-c9418ecb8270-serving-cert\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718222 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-image-import-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6wf\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-kube-api-access-2l6wf\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48qgh\" (UniqueName: \"kubernetes.io/projected/3f0021b0-4c6c-4085-9819-5c94471f320c-kube-api-access-48qgh\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718348 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718429 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-default-certificate\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c04fd14-9dfc-4c0f-8125-8663eac51a45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-serving-cert\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718583 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7e80fe-b260-461e-a11b-633a14eb304d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718648 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-metrics-tls\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqqv\" (UniqueName: \"kubernetes.io/projected/c7dff6ec-6703-40fb-a94a-c1d8b4641703-kube-api-access-bvqqv\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718754 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-serving-cert\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718918 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3710240-88d7-4611-bd77-6de0c54c1e3c-config\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718950 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718982 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c6e703e-85e3-4d17-a946-c17e42c27985-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719055 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719107 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-stats-auth\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbggm\" (UniqueName: \"kubernetes.io/projected/aa8e769a-613b-40f2-9d07-b034d7871302-kube-api-access-sbggm\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-srv-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7e80fe-b260-461e-a11b-633a14eb304d-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719294 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-audit-policies\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-trusted-ca\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719341 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-node-bootstrap-token\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719386 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0021b0-4c6c-4085-9819-5c94471f320c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719445 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719469 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ksr\" (UniqueName: \"kubernetes.io/projected/998697c8-1e0d-46ae-b92f-ae8faf0faef5-kube-api-access-n5ksr\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719491 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3710240-88d7-4611-bd77-6de0c54c1e3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719566 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdlw8\" (UniqueName: \"kubernetes.io/projected/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-kube-api-access-sdlw8\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719590 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-webhook-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719610 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-client\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719657 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbq9\" (UniqueName: \"kubernetes.io/projected/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-kube-api-access-bfbq9\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-config\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719702 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719747 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719768 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93bf45fc-6447-479a-83d0-c9418ecb8270-config\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719790 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-csi-data-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719816 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719840 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-serving-cert\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-images\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719887 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctkmv\" (UniqueName: \"kubernetes.io/projected/2c6e703e-85e3-4d17-a946-c17e42c27985-kube-api-access-ctkmv\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719914 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4fr\" (UniqueName: \"kubernetes.io/projected/26869f13-c7ee-411c-85a1-72338142184c-kube-api-access-gm4fr\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719970 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47777b7a-7599-4366-8e0f-a2ddf382e6ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719980 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-image-import-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719997 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bvz\" (UniqueName: \"kubernetes.io/projected/6fc8d511-a907-4f74-9a1c-e262d684b6a5-kube-api-access-q6bvz\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dff6ec-6703-40fb-a94a-c1d8b4641703-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720077 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720123 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720145 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-socket-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720167 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-registration-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720208 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95t2r\" (UniqueName: \"kubernetes.io/projected/e6399c54-0b37-424f-8535-f8b0ab33ff52-kube-api-access-95t2r\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720232 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-encryption-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit-dir\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720289 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-metrics-tls\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720353 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dff6ec-6703-40fb-a94a-c1d8b4641703-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-node-pullsecrets\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720462 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-service-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-metrics-certs\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720506 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a8d999-1731-4a72-8ca8-25913744a8e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720601 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47777b7a-7599-4366-8e0f-a2ddf382e6ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720683 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720721 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720747 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9pm\" (UniqueName: \"kubernetes.io/projected/6ed62cdb-a7e1-4366-88b7-7c2ed1102203-kube-api-access-7h9pm\") pod \"migrator-59844c95c7-z8hql\" (UID: \"6ed62cdb-a7e1-4366-88b7-7c2ed1102203\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-trusted-ca\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720799 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720824 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ghj\" (UniqueName: \"kubernetes.io/projected/715b331b-b140-461c-9a06-ba6ede3af8b6-kube-api-access-d5ghj\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720866 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-config\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720890 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82qp\" (UniqueName: \"kubernetes.io/projected/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-kube-api-access-h82qp\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-encryption-config\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720938 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f0021b0-4c6c-4085-9819-5c94471f320c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-auth-proxy-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720984 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9pv\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-kube-api-access-9f9pv\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721006 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908b160b-0e48-4c2c-a35b-45fe25ca093f-config-volume\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721062 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6fc8d511-a907-4f74-9a1c-e262d684b6a5-proxy-tls\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721085 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-machine-approver-tls\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721107 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-etcd-client\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721155 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjgr\" (UniqueName: \"kubernetes.io/projected/908b160b-0e48-4c2c-a35b-45fe25ca093f-kube-api-access-6vjgr\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721199 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-plugins-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721225 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a8d999-1731-4a72-8ca8-25913744a8e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721307 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721353 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: 
\"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721378 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721425 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26869f13-c7ee-411c-85a1-72338142184c-audit-dir\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac9c1de-1cfe-48d3-aafc-ddb41647c661-cert\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721497 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721717 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-config\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721759 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/908b160b-0e48-4c2c-a35b-45fe25ca093f-metrics-tls\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721784 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3710240-88d7-4611-bd77-6de0c54c1e3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721885 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721910 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.722087 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.722399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7e80fe-b260-461e-a11b-633a14eb304d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723057 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723161 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-config\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.724332 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.724421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720780 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.724971 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.725390 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.726419 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5ca023-fc82-4365-b2f9-f57220013a9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.726882 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-serving-cert\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.726968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-config\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.727613 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-client\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.727979 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-serving-cert\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.728110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 
19:36:17.728964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-node-pullsecrets\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.729744 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-service-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.729810 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-encryption-config\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.730114 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.230090836 +0000 UTC m=+141.812045721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.730367 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-auth-proxy-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.730672 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.731083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dff6ec-6703-40fb-a94a-c1d8b4641703-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit-dir\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732465 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-images\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.736278 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47777b7a-7599-4366-8e0f-a2ddf382e6ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.736605 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod 
\"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.736666 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dff6ec-6703-40fb-a94a-c1d8b4641703-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.737668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.737714 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.737987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738150 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738515 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7e80fe-b260-461e-a11b-633a14eb304d-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738786 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738882 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-encryption-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.739801 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741358 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741705 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741955 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-config\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.742279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.742598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-serving-cert\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.743328 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.743736 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.744764 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.745052 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-audit-policies\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.745145 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26869f13-c7ee-411c-85a1-72338142184c-audit-dir\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.746272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.746764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.748847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.748931 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-trusted-ca\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.750142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.751757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47777b7a-7599-4366-8e0f-a2ddf382e6ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.752986 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-serving-cert\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.754264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-etcd-client\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.754799 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.760885 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.764963 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.786895 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.787972 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.798770 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.801337 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.812629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.821456 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.822849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/998697c8-1e0d-46ae-b92f-ae8faf0faef5-proxy-tls\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823047 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftj25\" (UniqueName: \"kubernetes.io/projected/522d227a-c827-415e-9e8b-e5907ba83363-kube-api-access-ftj25\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.823076 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.323045548 +0000 UTC m=+141.905000433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823122 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-mountpoint-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823223 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823256 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-mountpoint-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgpm\" (UniqueName: \"kubernetes.io/projected/81931b41-8917-4936-9e02-52f7c8c0f1c1-kube-api-access-stgpm\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823342 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-images\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823375 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c04fd14-9dfc-4c0f-8125-8663eac51a45-config\") pod 
\"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823407 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/715b331b-b140-461c-9a06-ba6ede3af8b6-tmpfs\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823455 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/998697c8-1e0d-46ae-b92f-ae8faf0faef5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823489 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/df2da7f7-2427-4099-ba40-855a7e850256-signing-cabundle\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c04fd14-9dfc-4c0f-8125-8663eac51a45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nh64w\" (UniqueName: \"kubernetes.io/projected/bd39f7e2-211c-4104-a72d-5374a6e95ee1-kube-api-access-nh64w\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823589 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/df2da7f7-2427-4099-ba40-855a7e850256-signing-key\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823656 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwssc\" (UniqueName: \"kubernetes.io/projected/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-kube-api-access-xwssc\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823687 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93bf45fc-6447-479a-83d0-c9418ecb8270-serving-cert\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823725 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48qgh\" (UniqueName: \"kubernetes.io/projected/3f0021b0-4c6c-4085-9819-5c94471f320c-kube-api-access-48qgh\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823775 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-default-certificate\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c04fd14-9dfc-4c0f-8125-8663eac51a45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823902 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-metrics-tls\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3710240-88d7-4611-bd77-6de0c54c1e3c-config\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/715b331b-b140-461c-9a06-ba6ede3af8b6-tmpfs\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c6e703e-85e3-4d17-a946-c17e42c27985-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-stats-auth\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.824130 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbggm\" (UniqueName: \"kubernetes.io/projected/aa8e769a-613b-40f2-9d07-b034d7871302-kube-api-access-sbggm\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824154 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-srv-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-trusted-ca\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-node-bootstrap-token\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: 
\"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824275 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0021b0-4c6c-4085-9819-5c94471f320c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824303 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ksr\" (UniqueName: \"kubernetes.io/projected/998697c8-1e0d-46ae-b92f-ae8faf0faef5-kube-api-access-n5ksr\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824324 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3710240-88d7-4611-bd77-6de0c54c1e3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824347 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdlw8\" (UniqueName: \"kubernetes.io/projected/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-kube-api-access-sdlw8\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824367 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-webhook-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824435 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93bf45fc-6447-479a-83d0-c9418ecb8270-config\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824456 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-csi-data-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctkmv\" (UniqueName: \"kubernetes.io/projected/2c6e703e-85e3-4d17-a946-c17e42c27985-kube-api-access-ctkmv\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824522 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bvz\" (UniqueName: \"kubernetes.io/projected/6fc8d511-a907-4f74-9a1c-e262d684b6a5-kube-api-access-q6bvz\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-socket-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824564 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-registration-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824585 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95t2r\" (UniqueName: \"kubernetes.io/projected/e6399c54-0b37-424f-8535-f8b0ab33ff52-kube-api-access-95t2r\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") 
pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824638 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-metrics-tls\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824672 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-metrics-certs\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a8d999-1731-4a72-8ca8-25913744a8e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824758 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9pm\" (UniqueName: \"kubernetes.io/projected/6ed62cdb-a7e1-4366-88b7-7c2ed1102203-kube-api-access-7h9pm\") pod \"migrator-59844c95c7-z8hql\" (UID: \"6ed62cdb-a7e1-4366-88b7-7c2ed1102203\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ghj\" (UniqueName: \"kubernetes.io/projected/715b331b-b140-461c-9a06-ba6ede3af8b6-kube-api-access-d5ghj\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.824846 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c04fd14-9dfc-4c0f-8125-8663eac51a45-config\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824891 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f0021b0-4c6c-4085-9819-5c94471f320c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824915 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908b160b-0e48-4c2c-a35b-45fe25ca093f-config-volume\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824964 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9pv\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-kube-api-access-9f9pv\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: 
\"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824987 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6fc8d511-a907-4f74-9a1c-e262d684b6a5-proxy-tls\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825011 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vjgr\" (UniqueName: \"kubernetes.io/projected/908b160b-0e48-4c2c-a35b-45fe25ca093f-kube-api-access-6vjgr\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825032 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-plugins-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825065 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac9c1de-1cfe-48d3-aafc-ddb41647c661-cert\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a8d999-1731-4a72-8ca8-25913744a8e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: 
\"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825117 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/908b160b-0e48-4c2c-a35b-45fe25ca093f-metrics-tls\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825164 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3710240-88d7-4611-bd77-6de0c54c1e3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825230 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa8e769a-613b-40f2-9d07-b034d7871302-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-certs\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825272 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74a8d999-1731-4a72-8ca8-25913744a8e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825298 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhpt\" (UniqueName: \"kubernetes.io/projected/bac9c1de-1cfe-48d3-aafc-ddb41647c661-kube-api-access-8vhpt\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825320 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-srv-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825346 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prh8n\" (UniqueName: \"kubernetes.io/projected/93bf45fc-6447-479a-83d0-c9418ecb8270-kube-api-access-prh8n\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825380 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzpp\" (UniqueName: \"kubernetes.io/projected/df2da7f7-2427-4099-ba40-855a7e850256-kube-api-access-xkzpp\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825434 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/522d227a-c827-415e-9e8b-e5907ba83363-service-ca-bundle\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.826332 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/522d227a-c827-415e-9e8b-e5907ba83363-service-ca-bundle\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.826925 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-csi-data-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.827290 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-socket-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.827352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-registration-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.827876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93bf45fc-6447-479a-83d0-c9418ecb8270-config\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.828718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.829621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.830333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-trusted-ca\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.831028 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.831410 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/df2da7f7-2427-4099-ba40-855a7e850256-signing-cabundle\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.831989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-srv-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.832111 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-stats-auth\") pod \"router-default-5444994796-8xrbm\" (UID: 
\"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.832555 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/998697c8-1e0d-46ae-b92f-ae8faf0faef5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.833828 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-node-bootstrap-token\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835345 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835413 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-images\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835810 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93bf45fc-6447-479a-83d0-c9418ecb8270-serving-cert\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835995 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/998697c8-1e0d-46ae-b92f-ae8faf0faef5-proxy-tls\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.836237 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f0021b0-4c6c-4085-9819-5c94471f320c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.836377 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3710240-88d7-4611-bd77-6de0c54c1e3c-config\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.836579 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.836889 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a8d999-1731-4a72-8ca8-25913744a8e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837138 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-metrics-tls\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837490 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c6e703e-85e3-4d17-a946-c17e42c27985-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837520 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908b160b-0e48-4c2c-a35b-45fe25ca093f-config-volume\") pod \"dns-default-nqdfv\" (UID: 
\"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c04fd14-9dfc-4c0f-8125-8663eac51a45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.838391 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.839573 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3710240-88d7-4611-bd77-6de0c54c1e3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-metrics-tls\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6fc8d511-a907-4f74-9a1c-e262d684b6a5-proxy-tls\") 
pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840345 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-plugins-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-metrics-certs\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.840936 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.340917146 +0000 UTC m=+141.922872081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.841611 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.843249 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/908b160b-0e48-4c2c-a35b-45fe25ca093f-metrics-tls\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.843825 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a8d999-1731-4a72-8ca8-25913744a8e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.844264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0021b0-4c6c-4085-9819-5c94471f320c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.844553 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-srv-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.844664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/df2da7f7-2427-4099-ba40-855a7e850256-signing-key\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845234 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-default-certificate\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845357 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-webhook-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa8e769a-613b-40f2-9d07-b034d7871302-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845756 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-certs\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac9c1de-1cfe-48d3-aafc-ddb41647c661-cert\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.851486 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.861221 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.881756 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.887212 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-machine-approver-tls\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.927225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.927543 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.427514347 +0000 UTC m=+142.009469202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.937009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f446d\" (UniqueName: \"kubernetes.io/projected/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-kube-api-access-f446d\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.957824 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"] Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.962245 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tqj\" (UniqueName: \"kubernetes.io/projected/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-kube-api-access-t4tqj\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.981564 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxl8c\" (UniqueName: \"kubernetes.io/projected/1c5ca023-fc82-4365-b2f9-f57220013a9f-kube-api-access-qxl8c\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 
crc kubenswrapper[4932]: I0218 19:36:17.995496 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hphc8"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.000818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pvb\" (UniqueName: \"kubernetes.io/projected/9a7e80fe-b260-461e-a11b-633a14eb304d-kube-api-access-62pvb\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:18 crc kubenswrapper[4932]: W0218 19:36:18.014609 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a072d2a_dd0d_4fe3_a7d2_f5baaa9df95e.slice/crio-37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527 WatchSource:0}: Error finding container 37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527: Status 404 returned error can't find the container with id 37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527 Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.015160 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6wf\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-kube-api-access-2l6wf\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.023008 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" event={"ID":"09b62af0-116d-4918-a691-e7040fd7dc22","Type":"ContainerStarted","Data":"a1b9ddb4e29529281b7db75aec531d1deedee40e13771990e37f0211d1d80b71"} Feb 18 19:36:18 crc kubenswrapper[4932]: 
I0218 19:36:18.024107 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" event={"ID":"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e","Type":"ContainerStarted","Data":"37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527"} Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.030426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.030887 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.53086812 +0000 UTC m=+142.112822955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.036383 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbq9\" (UniqueName: \"kubernetes.io/projected/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-kube-api-access-bfbq9\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.060889 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqqv\" (UniqueName: \"kubernetes.io/projected/c7dff6ec-6703-40fb-a94a-c1d8b4641703-kube-api-access-bvqqv\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.070689 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.073313 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cn2nc"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.073345 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.079036 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.098691 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82qp\" (UniqueName: \"kubernetes.io/projected/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-kube-api-access-h82qp\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.105706 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.113954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.125536 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.130934 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.131120 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.631090394 +0000 UTC m=+142.213045249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.131580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.132043 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.632036035 +0000 UTC m=+142.213990880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.132860 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.135261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.158807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.175616 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-279w9\" (UniqueName: 
\"kubernetes.io/projected/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-kube-api-access-279w9\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.195810 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.214992 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.236774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.237442 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.737418805 +0000 UTC m=+142.319373650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.237540 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.237590 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.263900 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4fr\" (UniqueName: \"kubernetes.io/projected/26869f13-c7ee-411c-85a1-72338142184c-kube-api-access-gm4fr\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.278304 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftj25\" (UniqueName: \"kubernetes.io/projected/522d227a-c827-415e-9e8b-e5907ba83363-kube-api-access-ftj25\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.282249 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.289762 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.291459 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.297215 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgpm\" (UniqueName: \"kubernetes.io/projected/81931b41-8917-4936-9e02-52f7c8c0f1c1-kube-api-access-stgpm\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.298961 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.308568 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.318279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c04fd14-9dfc-4c0f-8125-8663eac51a45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.349069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.349562 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.849547874 +0000 UTC m=+142.431502719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.350089 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.351422 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.368657 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.371282 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.378386 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwssc\" (UniqueName: \"kubernetes.io/projected/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-kube-api-access-xwssc\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.397519 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.397916 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctkmv\" (UniqueName: \"kubernetes.io/projected/2c6e703e-85e3-4d17-a946-c17e42c27985-kube-api-access-ctkmv\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.421843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bvz\" (UniqueName: \"kubernetes.io/projected/6fc8d511-a907-4f74-9a1c-e262d684b6a5-kube-api-access-q6bvz\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.432453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95t2r\" (UniqueName: \"kubernetes.io/projected/e6399c54-0b37-424f-8535-f8b0ab33ff52-kube-api-access-95t2r\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.438909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbggm\" (UniqueName: \"kubernetes.io/projected/aa8e769a-613b-40f2-9d07-b034d7871302-kube-api-access-sbggm\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.455448 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.455959 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.955944326 +0000 UTC m=+142.537899171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.459691 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.462231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ksr\" (UniqueName: \"kubernetes.io/projected/998697c8-1e0d-46ae-b92f-ae8faf0faef5-kube-api-access-n5ksr\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.473459 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.475366 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.483617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdlw8\" (UniqueName: \"kubernetes.io/projected/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-kube-api-access-sdlw8\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.489726 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.503506 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3710240-88d7-4611-bd77-6de0c54c1e3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.512316 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.514117 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.517793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.518738 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.522868 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.536773 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.539846 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48qgh\" (UniqueName: \"kubernetes.io/projected/3f0021b0-4c6c-4085-9819-5c94471f320c-kube-api-access-48qgh\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:18 crc kubenswrapper[4932]: W0218 19:36:18.545487 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcbb6fa7_ef01_48aa_8ac8_ba4bb47d1ffc.slice/crio-f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a WatchSource:0}: Error finding container f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a: Status 404 returned error can't find the container with id f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.552956 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.554339 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.566445 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.567006 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.584707 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.586114 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.586438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.589482 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.089305298 +0000 UTC m=+142.671260143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.590524 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.593685 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.597462 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.602870 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.607306 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ghj\" (UniqueName: \"kubernetes.io/projected/715b331b-b140-461c-9a06-ba6ede3af8b6-kube-api-access-d5ghj\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.611737 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9pm\" (UniqueName: \"kubernetes.io/projected/6ed62cdb-a7e1-4366-88b7-7c2ed1102203-kube-api-access-7h9pm\") pod \"migrator-59844c95c7-z8hql\" (UID: \"6ed62cdb-a7e1-4366-88b7-7c2ed1102203\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.619372 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.633604 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh64w\" (UniqueName: \"kubernetes.io/projected/bd39f7e2-211c-4104-a72d-5374a6e95ee1-kube-api-access-nh64w\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.642098 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.660209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vjgr\" (UniqueName: \"kubernetes.io/projected/908b160b-0e48-4c2c-a35b-45fe25ca093f-kube-api-access-6vjgr\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.688265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.688658 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:19.188642833 +0000 UTC m=+142.770597678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.694366 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9pv\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-kube-api-access-9f9pv\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.705782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhpt\" (UniqueName: \"kubernetes.io/projected/bac9c1de-1cfe-48d3-aafc-ddb41647c661-kube-api-access-8vhpt\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.726711 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prh8n\" (UniqueName: \"kubernetes.io/projected/93bf45fc-6447-479a-83d0-c9418ecb8270-kube-api-access-prh8n\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.742695 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzpp\" (UniqueName: 
\"kubernetes.io/projected/df2da7f7-2427-4099-ba40-855a7e850256-kube-api-access-xkzpp\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:18 crc kubenswrapper[4932]: W0218 19:36:18.742868 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e44919_11c5_4974_9c71_ff803e668247.slice/crio-aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7 WatchSource:0}: Error finding container aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7: Status 404 returned error can't find the container with id aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7 Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.745986 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.753267 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.769937 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74a8d999-1731-4a72-8ca8-25913744a8e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.788729 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.791996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.792787 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.292765734 +0000 UTC m=+142.874720589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.809510 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.831024 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.844610 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.871504 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.876445 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.889135 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.894273 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.894458 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.39443263 +0000 UTC m=+142.976387475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.894680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.895138 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.395130595 +0000 UTC m=+142.977085430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.896299 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.954326 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.997937 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.998087 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.498045129 +0000 UTC m=+143.079999974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.998447 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.998848 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.498829017 +0000 UTC m=+143.080783862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.003495 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jr49c"] Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.010990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.030499 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.032518 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8xrbm" event={"ID":"522d227a-c827-415e-9e8b-e5907ba83363","Type":"ContainerStarted","Data":"c19059a5247644dfb6f6673b50a243ae81f65e5ead5c8b30eb3d1f15b80a72b8"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.034859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" event={"ID":"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b","Type":"ContainerStarted","Data":"9c58cdc919b520b7f14ab596c4bccc96d30d431bc8ff152393a702a7d052edb3"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.035882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" event={"ID":"9a7e80fe-b260-461e-a11b-633a14eb304d","Type":"ContainerStarted","Data":"312f3f14ea087659b2bafcf65b0d3e93238060becb67f2c1bc28e39bfb82c2d5"} Feb 18 
19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.037511 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" event={"ID":"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a","Type":"ContainerStarted","Data":"2f6d8ca4a742b788eba554b61e535dd067b567bf7163e507a0bd42a1e40a120e"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.037557 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" event={"ID":"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a","Type":"ContainerStarted","Data":"b836577709b6122069d93f05915cfe6784ba1b6d249407c38ab2b2e650fd914d"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.046481 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" event={"ID":"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e","Type":"ContainerStarted","Data":"b3050cbcf8c7bdd94e967767973a040ad00d2a27262f0f0929d358af15295afd"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.050803 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" event={"ID":"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc","Type":"ContainerStarted","Data":"f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.052681 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerStarted","Data":"7444c4d3cedc79cae24f1e017b9fa1b3385d64a4dc475008ab7f7a213fdab561"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.054849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" 
event={"ID":"09b62af0-116d-4918-a691-e7040fd7dc22","Type":"ContainerStarted","Data":"3cd379069faa7198662e37cd80ce297bf474753981df30f92eca5bcc49bd1703"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.059763 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerStarted","Data":"aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.062028 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jsz8m" event={"ID":"581a9ff6-cf7b-4bac-bd81-41c6fb080f36","Type":"ContainerStarted","Data":"c43c48a5c0c4932771fe306dfce5d7e2345370d619671714a1bb357bd1e97e73"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.063165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cn2nc" event={"ID":"d75d91b3-7800-4645-b272-768f9d02f81b","Type":"ContainerStarted","Data":"cab4a223ec7f156131206c10d378ba9415f29c4c714e115bedc13aa7d4ccf4f7"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.063203 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cn2nc" event={"ID":"d75d91b3-7800-4645-b272-768f9d02f81b","Type":"ContainerStarted","Data":"199659edc1e3b267f89184c2e65fda9d42bc658e582217b6947492d14b691cd4"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.063620 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.064313 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" 
event={"ID":"1c5ca023-fc82-4365-b2f9-f57220013a9f","Type":"ContainerStarted","Data":"928be0de59e62712d1016992d5448bf8b41eeb18ffaf06107fcdf7dc628a218a"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.065774 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.065815 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.067139 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.099269 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.099872 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.599842668 +0000 UTC m=+143.181797513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.201092 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.202948 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.702935287 +0000 UTC m=+143.284890232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.304989 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.305784 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.805768089 +0000 UTC m=+143.387722934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.407050 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.408218 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.908203231 +0000 UTC m=+143.490158076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.510111 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.512135 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.012098437 +0000 UTC m=+143.594053282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.611733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.612236 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.112218909 +0000 UTC m=+143.694173754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.712641 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.713102 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.213084957 +0000 UTC m=+143.795039802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.782736 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" podStartSLOduration=119.782716929 podStartE2EDuration="1m59.782716929s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:19.750303717 +0000 UTC m=+143.332258572" watchObservedRunningTime="2026-02-18 19:36:19.782716929 +0000 UTC m=+143.364671774" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.814188 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.814673 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.314661701 +0000 UTC m=+143.896616546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.853971 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" podStartSLOduration=119.853955747 podStartE2EDuration="1m59.853955747s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:19.85364718 +0000 UTC m=+143.435602045" watchObservedRunningTime="2026-02-18 19:36:19.853955747 +0000 UTC m=+143.435910592" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.920899 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.921158 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.421134255 +0000 UTC m=+144.003089100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.921748 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.922329 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.422321851 +0000 UTC m=+144.004276696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.027421 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.027727 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.52771262 +0000 UTC m=+144.109667465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.031491 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"] Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.040879 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47777b7a_7599_4366_8e0f_a2ddf382e6ef.slice/crio-4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11 WatchSource:0}: Error finding container 4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11: Status 404 returned error can't find the container with id 4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.051384 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.083379 4932 generic.go:334] "Generic (PLEG): container finished" podID="9a7e80fe-b260-461e-a11b-633a14eb304d" containerID="90d5be797db248d2640057ff04b30de72a2804fef0c0b456b194cc9f8c67977e" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.083822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" event={"ID":"9a7e80fe-b260-461e-a11b-633a14eb304d","Type":"ContainerDied","Data":"90d5be797db248d2640057ff04b30de72a2804fef0c0b456b194cc9f8c67977e"} Feb 18 19:36:20 
crc kubenswrapper[4932]: I0218 19:36:20.119460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jsz8m" event={"ID":"581a9ff6-cf7b-4bac-bd81-41c6fb080f36","Type":"ContainerStarted","Data":"ad441ed7a35730cadd8e8588e7a6cfffb9f3bb89aaef56283fd32a80054565e1"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.121951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerStarted","Data":"6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.121995 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerStarted","Data":"62838236ab987cac95945631bbd754af35252c7b859d7a4d83e36fd02b26a5f7"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.122950 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.124677 4932 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xnxl9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.124715 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.129002 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.129324 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.629312965 +0000 UTC m=+144.211267810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.136317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8xrbm" event={"ID":"522d227a-c827-415e-9e8b-e5907ba83363","Type":"ContainerStarted","Data":"1ae97dc99305a261499f98c429df93a066a975f5e922cc42247e61773c47e815"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.138782 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" event={"ID":"47777b7a-7599-4366-8e0f-a2ddf382e6ef","Type":"ContainerStarted","Data":"4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.143100 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerStarted","Data":"92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.150236 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.156934 4932 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gkgsj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.156998 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.159689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" event={"ID":"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b","Type":"ContainerStarted","Data":"b4459f764e16527c18a51250cce5e306ee30ac9825f851a1c5c2d0d62885ff09"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.159720 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" event={"ID":"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b","Type":"ContainerStarted","Data":"9aa902ec21846d5b8a2bdc23ea50f55f410c73ae16b4d511a2392d4478696a00"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.212942 4932 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" podStartSLOduration=121.212921539 podStartE2EDuration="2m1.212921539s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.212478959 +0000 UTC m=+143.794433804" watchObservedRunningTime="2026-02-18 19:36:20.212921539 +0000 UTC m=+143.794876384" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.214580 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cn2nc" podStartSLOduration=120.214570495 podStartE2EDuration="2m0.214570495s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.189272142 +0000 UTC m=+143.771226987" watchObservedRunningTime="2026-02-18 19:36:20.214570495 +0000 UTC m=+143.796525340" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.221507 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" event={"ID":"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc","Type":"ContainerStarted","Data":"7ae75ec1437d0c88ac24b9a4a8b92017729c372e0745157e709e7ca5214ae506"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.221551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" event={"ID":"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc","Type":"ContainerStarted","Data":"067eecdb931c7ca9cb9ad3084b5746917be92d6e7400df1d520f86f7f2c84913"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.234469 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.257668 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.757640146 +0000 UTC m=+144.339594991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.263843 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerStarted","Data":"34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.265350 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.281640 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a63a8af-95ca-447b-9bfa-7aec1033c0b3" containerID="818fbc246e74599791c548f8ec9f674bd2cce62aac0db87936992de325d1643e" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.281762 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerDied","Data":"818fbc246e74599791c548f8ec9f674bd2cce62aac0db87936992de325d1643e"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.282109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerStarted","Data":"e73beb7e6c2d61c347018ebdfd420613ee7cffe12afd787cca60e29db574a674"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.299092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerStarted","Data":"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.299135 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerStarted","Data":"a258bd567aafbecb3f6618d81a779cce26f985331e18b4b996cf0d535bef2a19"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.340200 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" event={"ID":"1c5ca023-fc82-4365-b2f9-f57220013a9f","Type":"ContainerStarted","Data":"93c2e90186a48188f20f6fe25a5f2948a1fa9b792d3dc9cc770299443b2c0f06"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.340249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" event={"ID":"1c5ca023-fc82-4365-b2f9-f57220013a9f","Type":"ContainerStarted","Data":"d44f0518152106c3929a849bbc675996d033ea6cfa57c5c9e31da9fb8353aa55"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.342756 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.342801 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.360045 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.361356 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.861322007 +0000 UTC m=+144.443276852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.461068 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.461444 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.462249 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.463953 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.963937604 +0000 UTC m=+144.545892449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.466578 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.476705 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:20 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:20 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:20 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.476764 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.479952 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dpln6"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.481946 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"] Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.564393 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.064382633 +0000 UTC m=+144.646337478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.564166 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.616380 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" podStartSLOduration=121.616363892 podStartE2EDuration="2m1.616363892s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.614149562 +0000 UTC m=+144.196104397" watchObservedRunningTime="2026-02-18 19:36:20.616363892 +0000 UTC m=+144.198318737" Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.639868 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048e17bc_05bf_40e4_9f40_87d936fcf772.slice/crio-9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b WatchSource:0}: Error finding container 9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b: Status 404 returned error can't find the container with id 9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.662241 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.665687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.666138 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.166124391 +0000 UTC m=+144.748079236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.673353 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.705984 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.706427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.738718 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.767108 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.767505 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:21.267493671 +0000 UTC m=+144.849448516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.768856 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f874p"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.791862 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.795032 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.806670 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.828863 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.851889 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.851929 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.855974 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nqdfv"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.860690 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.860903 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.862018 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jsz8m" podStartSLOduration=5.862006257 podStartE2EDuration="5.862006257s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.6835864 +0000 UTC m=+144.265541245" watchObservedRunningTime="2026-02-18 19:36:20.862006257 +0000 UTC m=+144.443961102" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.864698 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8xrbm" podStartSLOduration=120.864688997 podStartE2EDuration="2m0.864688997s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.730653759 +0000 UTC m=+144.312608604" watchObservedRunningTime="2026-02-18 19:36:20.864688997 +0000 UTC m=+144.446643842" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.867346 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.868588 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" podStartSLOduration=121.868576664 podStartE2EDuration="2m1.868576664s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.76208199 +0000 UTC m=+144.344036845" watchObservedRunningTime="2026-02-18 19:36:20.868576664 +0000 UTC m=+144.450531529" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.870082 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.870331 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.370315793 +0000 UTC m=+144.952270638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.870479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.870810 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.370799063 +0000 UTC m=+144.952753908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.873030 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"]
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.874990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rmh4d"]
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.878812 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"]
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.882226 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx49r"]
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.882838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fgjll" podStartSLOduration=120.882826331 podStartE2EDuration="2m0.882826331s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.834399702 +0000 UTC m=+144.416354557" watchObservedRunningTime="2026-02-18 19:36:20.882826331 +0000 UTC m=+144.464781176"
Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.892645 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2da7f7_2427_4099_ba40_855a7e850256.slice/crio-38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e WatchSource:0}: Error finding container 38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e: Status 404 returned error can't find the container with id 38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e
Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.892820 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a8d999_1731_4a72_8ca8_25913744a8e7.slice/crio-6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3 WatchSource:0}: Error finding container 6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3: Status 404 returned error can't find the container with id 6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.893115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vqskh"]
Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.968266 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a167a2c_fdc1_4d22_83b7_f1a63ab147bc.slice/crio-ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe WatchSource:0}: Error finding container ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe: Status 404 returned error can't find the container with id ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.970912 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.971286 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.471271113 +0000 UTC m=+145.053225958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.992041 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" podStartSLOduration=120.992022115 podStartE2EDuration="2m0.992022115s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.936699592 +0000 UTC m=+144.518654437" watchObservedRunningTime="2026-02-18 19:36:20.992022115 +0000 UTC m=+144.573976960"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.072452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.072783 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.572773265 +0000 UTC m=+145.154728110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.078649 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" podStartSLOduration=121.078630606 podStartE2EDuration="2m1.078630606s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.993605961 +0000 UTC m=+144.575560806" watchObservedRunningTime="2026-02-18 19:36:21.078630606 +0000 UTC m=+144.660585451"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.079310 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" podStartSLOduration=121.079304121 podStartE2EDuration="2m1.079304121s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.070601387 +0000 UTC m=+144.652556232" watchObservedRunningTime="2026-02-18 19:36:21.079304121 +0000 UTC m=+144.661258966"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.117088 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" podStartSLOduration=121.117069953 podStartE2EDuration="2m1.117069953s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.10978394 +0000 UTC m=+144.691738785" watchObservedRunningTime="2026-02-18 19:36:21.117069953 +0000 UTC m=+144.699024798"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.117362 4932 csr.go:261] certificate signing request csr-lz5lj is approved, waiting to be issued
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.144066 4932 csr.go:257] certificate signing request csr-lz5lj is issued
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.174553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.176874 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.676849215 +0000 UTC m=+145.258804060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.276387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.276725 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.776713361 +0000 UTC m=+145.358668206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.345110 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" event={"ID":"c3710240-88d7-4611-bd77-6de0c54c1e3c","Type":"ContainerStarted","Data":"eb29a1953f8ef555b1645669da6a90d8f070cd48aeada6ce1b6c77371e4952fd"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.346660 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" event={"ID":"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15","Type":"ContainerStarted","Data":"51f002d9b6f9101211dae4ede1845c3983a19da3a4901e450b150f11021b724f"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.346687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" event={"ID":"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15","Type":"ContainerStarted","Data":"e9b7f7db759fb788ae65c135e5d07ac1fe23f116dc3d863c923b98994c14a0f8"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.348065 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerStarted","Data":"67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.348091 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerStarted","Data":"9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.349441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" event={"ID":"81931b41-8917-4936-9e02-52f7c8c0f1c1","Type":"ContainerStarted","Data":"cc249f9154cfabcbe9700db247dc1b8fb0b1e7b7a0b0243940c7bdc27cf8f09e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.349464 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" event={"ID":"81931b41-8917-4936-9e02-52f7c8c0f1c1","Type":"ContainerStarted","Data":"dc3bfd6e6b4590f4e880e27fa3a01f953cefdcfa61b4b478074f887d4e12d642"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.350103 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.352396 4932 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zjx26 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.352449 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" podUID="81931b41-8917-4936-9e02-52f7c8c0f1c1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.354925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rmh4d" event={"ID":"bac9c1de-1cfe-48d3-aafc-ddb41647c661","Type":"ContainerStarted","Data":"dc2a9f0c56c83f0657d3cc25cdee9c7edcea7a640a07a72d2b8b85fb5fe4ce90"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.358814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerStarted","Data":"bb24a0e12402eddcf6cdc16832787a05797da7fa87b6bb5b59cf4835c2fe80cf"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.358849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerStarted","Data":"9ea8b26360f7cb4dcd765564e77bb5d9c92035fcbeedf0218763d1c916f7bc0d"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.362015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" event={"ID":"2c6e703e-85e3-4d17-a946-c17e42c27985","Type":"ContainerStarted","Data":"c1b266c1734fe90b1c784949370b87311a57272f865ea05fcc0aa1d68d48c4e1"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.362522 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" podStartSLOduration=122.362504894 podStartE2EDuration="2m2.362504894s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.360793476 +0000 UTC m=+144.942748341" watchObservedRunningTime="2026-02-18 19:36:21.362504894 +0000 UTC m=+144.944459739"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.363925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" event={"ID":"3f0021b0-4c6c-4085-9819-5c94471f320c","Type":"ContainerStarted","Data":"95f4f558a40a9cc2f0f1b1cbe2e98b450864ff1deded7a434e0390af6c580b32"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.364719 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f874p" event={"ID":"d939dc03-30d6-4839-abd8-1d8d1bbf8cad","Type":"ContainerStarted","Data":"3eb6d2b9e329772d6d00fb4611281dcf058a09c4e0bfbdd08add4911a8bf4cda"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.365738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" event={"ID":"aa8e769a-613b-40f2-9d07-b034d7871302","Type":"ContainerStarted","Data":"218b49c938d8e80777791011277f66c62ea4373c727337751615007c487feb8e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.369713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" event={"ID":"47777b7a-7599-4366-8e0f-a2ddf382e6ef","Type":"ContainerStarted","Data":"361b45f5dc274d18fba9f7f40ad8d679d6de6fc40c19a336e38eae5090e62165"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.371808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" event={"ID":"998697c8-1e0d-46ae-b92f-ae8faf0faef5","Type":"ContainerStarted","Data":"4682983c599d9514be76c4f2cadaeac84cec527837fd00a542e7eb9ac6f4a200"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.371831 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" event={"ID":"998697c8-1e0d-46ae-b92f-ae8faf0faef5","Type":"ContainerStarted","Data":"9e3041f3e4758a990acc3d697f9d4013638759cd561e1b6028d938d4a7766d22"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.379741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerStarted","Data":"784badddcd9797871fec35aacb4b375a077788de958864c50c207fa8ea3d3eb2"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.384617 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.384927 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.884912973 +0000 UTC m=+145.466867818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.402126 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" event={"ID":"74a8d999-1731-4a72-8ca8-25913744a8e7","Type":"ContainerStarted","Data":"6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.413888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" event={"ID":"6fc8d511-a907-4f74-9a1c-e262d684b6a5","Type":"ContainerStarted","Data":"3b68aa7d64e8bea74e172246fbc3f26c4b6a13c28a7a757a8cb98e8a05ee2db2"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.413927 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" event={"ID":"6fc8d511-a907-4f74-9a1c-e262d684b6a5","Type":"ContainerStarted","Data":"782a8daff7e4075ef0ee5a57ed51d3d652ff0f58ded3e121cc09fa5730661d7d"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.418275 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" podStartSLOduration=121.418233026 podStartE2EDuration="2m1.418233026s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.382632572 +0000 UTC m=+144.964587417" watchObservedRunningTime="2026-02-18 19:36:21.418233026 +0000 UTC m=+145.000187881"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.423204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" event={"ID":"4c04fd14-9dfc-4c0f-8125-8663eac51a45","Type":"ContainerStarted","Data":"1f865959b496ca27e6078451adc1f895bdaec83d0b23f9d162aff86190c8636e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.430073 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" event={"ID":"93bf45fc-6447-479a-83d0-c9418ecb8270","Type":"ContainerStarted","Data":"fed08d834c2e43a93dae61be4d645f3cba6fa54723a82749001fc1ec74880172"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.431426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" event={"ID":"df2da7f7-2427-4099-ba40-855a7e850256","Type":"ContainerStarted","Data":"38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.446261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" event={"ID":"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc","Type":"ContainerStarted","Data":"ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.467557 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:21 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:21 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:21 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.467614 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.472559 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" event={"ID":"bd39f7e2-211c-4104-a72d-5374a6e95ee1","Type":"ContainerStarted","Data":"342a46aed7fdc89c8cc4637447f1816bf65305c6d7857a37f7abafd5f11db868"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.485777 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" event={"ID":"c7dff6ec-6703-40fb-a94a-c1d8b4641703","Type":"ContainerStarted","Data":"f7bee4b1562f1eb0407c829ab28a97f1f0477027b3ada5a4b0d4d3f4ee058b7c"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.486630 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" event={"ID":"c7dff6ec-6703-40fb-a94a-c1d8b4641703","Type":"ContainerStarted","Data":"76fd31eabde21f076f98a251187e981172fba61be374d89db62d5886076a1db3"}
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.486341 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.986328384 +0000 UTC m=+145.568283229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.486051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.497726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"040ae9ae49376814e888d658c1d4d2cc66be889dd2988a46d1c05ce4f26c22d8"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.512374 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nqdfv" event={"ID":"908b160b-0e48-4c2c-a35b-45fe25ca093f","Type":"ContainerStarted","Data":"b4dd05743a3c6419d689689c92d01af1f24a5caeb97cde2d086bbd92e07bdc79"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.513203 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" podStartSLOduration=122.513162412 podStartE2EDuration="2m2.513162412s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.512439586 +0000 UTC m=+145.094394431" watchObservedRunningTime="2026-02-18 19:36:21.513162412 +0000 UTC m=+145.095117267"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.513425 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" podStartSLOduration=121.513419808 podStartE2EDuration="2m1.513419808s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.443963299 +0000 UTC m=+145.025918144" watchObservedRunningTime="2026-02-18 19:36:21.513419808 +0000 UTC m=+145.095374653"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.529890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" event={"ID":"6ed62cdb-a7e1-4366-88b7-7c2ed1102203","Type":"ContainerStarted","Data":"5d1d4fac97628abc1f8040c0c71c49080de76f1b2848ba9e3d4bd0b5514ed587"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.531568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" event={"ID":"04a7bf0c-8c31-4401-b3db-4b5168a0cac7","Type":"ContainerStarted","Data":"fa9d4cf6a682f199c1c9e21d57871bface75544a08f5ac988425f8a339027b54"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.539764 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" event={"ID":"9a7e80fe-b260-461e-a11b-633a14eb304d","Type":"ContainerStarted","Data":"e0a441562e2c04b159e2ade7b33d1f20c3cdf4ecdc2d56b221d9d37982ed6960"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.540242 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.546040 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerStarted","Data":"30225872b0a67aafde07dba9d04b10254757981032e6b7fedb413d3e3b48efc3"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.553693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" event={"ID":"715b331b-b140-461c-9a06-ba6ede3af8b6","Type":"ContainerStarted","Data":"566b710c5e0072c00efe43c1ee7247f757081caf10fecd43ce4e87d08e80bd49"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.560430 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" podStartSLOduration=121.560410915 podStartE2EDuration="2m1.560410915s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.558963893 +0000 UTC m=+145.140918738" watchObservedRunningTime="2026-02-18 19:36:21.560410915 +0000 UTC m=+145.142365760"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.563949 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.608478 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.608690 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.108669011 +0000 UTC m=+145.690623856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.609110 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.610258 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.110249626 +0000 UTC m=+145.692204471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.616526 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.716263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.716467 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.216443793 +0000 UTC m=+145.798398638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.716694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.717829 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.217820534 +0000 UTC m=+145.799775369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.820013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.820238 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.320211516 +0000 UTC m=+145.902166361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.820361 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.820647 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.320634966 +0000 UTC m=+145.902589811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.921420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.922071 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.422055426 +0000 UTC m=+146.004010271 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.029753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.030095 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.530084334 +0000 UTC m=+146.112039169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.130415 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.130562 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.630536974 +0000 UTC m=+146.212491819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.130658 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.131059 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.631051415 +0000 UTC m=+146.213006250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.148954 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 19:31:21 +0000 UTC, rotation deadline is 2026-11-14 21:08:45.321195999 +0000 UTC Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.149031 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6457h32m23.172167993s for next certificate rotation Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.235130 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.235696 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.735683407 +0000 UTC m=+146.317638252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.336870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.337447 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.837408325 +0000 UTC m=+146.419363170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.437611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.437801 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.937751652 +0000 UTC m=+146.519706507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.438069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.438551 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.938539899 +0000 UTC m=+146.520494814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.467211 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:22 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:22 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:22 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.467286 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.539725 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.540342 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:23.040305948 +0000 UTC m=+146.622260793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.588992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" event={"ID":"93bf45fc-6447-479a-83d0-c9418ecb8270","Type":"ContainerStarted","Data":"61b04be33b6c62292c94d7c5d311f781da7ec75322ae958cd3d09e1ea6f8a896"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.592505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" event={"ID":"3f0021b0-4c6c-4085-9819-5c94471f320c","Type":"ContainerStarted","Data":"2a9df8a7e9e95a5414f9e00f71069ae0342eda33bd0e7c6f34bd30c564108d6a"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.622650 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerStarted","Data":"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.623441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.626339 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5c79p container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.626383 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.627547 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f874p" event={"ID":"d939dc03-30d6-4839-abd8-1d8d1bbf8cad","Type":"ContainerStarted","Data":"45f95ff2ddc808c00dbaaefab86b9d86b57309866917df8479462017de9d89d5"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.628300 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.633550 4932 patch_prober.go:28] interesting pod/console-operator-58897d9998-f874p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.633619 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f874p" podUID="d939dc03-30d6-4839-abd8-1d8d1bbf8cad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.641571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" event={"ID":"715b331b-b140-461c-9a06-ba6ede3af8b6","Type":"ContainerStarted","Data":"1d0f4e8bd197ca7adeff5e95b9e9c7dc4e55188347e271ee2ac7c783245e466a"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.642497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.647019 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.652723 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.152702833 +0000 UTC m=+146.734657888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.661599 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" event={"ID":"df2da7f7-2427-4099-ba40-855a7e850256","Type":"ContainerStarted","Data":"b5ed4a6f0bf4f91923dd049718a94650dc99ee1ca654ed90bdaf1097425ae369"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.683096 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" podStartSLOduration=122.6830812 podStartE2EDuration="2m2.6830812s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.682360434 +0000 UTC m=+146.264315279" watchObservedRunningTime="2026-02-18 19:36:22.6830812 +0000 UTC m=+146.265036035" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.687692 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" event={"ID":"6fc8d511-a907-4f74-9a1c-e262d684b6a5","Type":"ContainerStarted","Data":"ba6f8d423ec6421c3748fcba26abc40208902613512ac682c5392e2d4b80bdc5"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.692035 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" 
event={"ID":"bd39f7e2-211c-4104-a72d-5374a6e95ee1","Type":"ContainerStarted","Data":"537fb16ce4adcd65df62cfc67a3162f784d1e5eeb00f209b9e035773d80fa2de"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.692737 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.694286 4932 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pkfx8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.694325 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" podUID="bd39f7e2-211c-4104-a72d-5374a6e95ee1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.717057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" event={"ID":"04a7bf0c-8c31-4401-b3db-4b5168a0cac7","Type":"ContainerStarted","Data":"bead4ca36a615a1197bb9f0125682cf01e16d8e7b0a5a7f38b0b82457f5e8d12"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.720016 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rmh4d" event={"ID":"bac9c1de-1cfe-48d3-aafc-ddb41647c661","Type":"ContainerStarted","Data":"806cc07457ba48e38dd072d8b5b28ea7e4c94f3269892cf94e3533b1bdcd1df6"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.726881 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" 
event={"ID":"2c6e703e-85e3-4d17-a946-c17e42c27985","Type":"ContainerStarted","Data":"628192a5e10ee337c3fcef88b9c0176582782214135afb5ec2296d7f037fc31a"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.734633 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" podStartSLOduration=122.734611859 podStartE2EDuration="2m2.734611859s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.728471282 +0000 UTC m=+146.310426117" watchObservedRunningTime="2026-02-18 19:36:22.734611859 +0000 UTC m=+146.316566714" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.751823 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.752911 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.252872746 +0000 UTC m=+146.834827771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.775137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerStarted","Data":"6cdd4a2a2b9d9a810ebbcaa2625c227df643198b4946b058e045775f4a325219"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.786088 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"18e27fcaaceb83a927ce3fb5d48077583f5d03934b1a64a091ddf4e2953de5b8"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.795567 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" event={"ID":"6ed62cdb-a7e1-4366-88b7-7c2ed1102203","Type":"ContainerStarted","Data":"569a2d52f045f1d43aee0b414f30b30c374af7bf8028870cbef06575176d5248"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.795613 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" event={"ID":"6ed62cdb-a7e1-4366-88b7-7c2ed1102203","Type":"ContainerStarted","Data":"5367c9a85ef40742206bc445b261d6f2fb61ef5fb4db699d03b0ec6258478593"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.817197 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nqdfv" 
event={"ID":"908b160b-0e48-4c2c-a35b-45fe25ca093f","Type":"ContainerStarted","Data":"43d9d1395d43a6f0356745ba024182eac1b823dd5fe9b5bfb1ee28fb5b2c5216"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.849510 4932 generic.go:334] "Generic (PLEG): container finished" podID="26869f13-c7ee-411c-85a1-72338142184c" containerID="bb24a0e12402eddcf6cdc16832787a05797da7fa87b6bb5b59cf4835c2fe80cf" exitCode=0 Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.849578 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerDied","Data":"bb24a0e12402eddcf6cdc16832787a05797da7fa87b6bb5b59cf4835c2fe80cf"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.857153 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" event={"ID":"c3710240-88d7-4611-bd77-6de0c54c1e3c","Type":"ContainerStarted","Data":"865186d87f3bf8f8f99a7045b440b20d954cf72d2407262032a86db62f0a1877"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.859016 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.861063 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.361051347 +0000 UTC m=+146.943006182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.867556 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" event={"ID":"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15","Type":"ContainerStarted","Data":"f6950a1501fe5ccdeb0870ba190fb5d884a9323c0e1da591b8925c6c9048b929"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.868159 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.870083 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" event={"ID":"4c04fd14-9dfc-4c0f-8125-8663eac51a45","Type":"ContainerStarted","Data":"32eab7f72bd3d49fd5b532bc227959e31e4fff903969e4b5c95bc612859ef888"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.905060 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.917686 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" podStartSLOduration=122.917668239 podStartE2EDuration="2m2.917668239s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.831868107 +0000 UTC m=+146.413822952" watchObservedRunningTime="2026-02-18 19:36:22.917668239 +0000 UTC m=+146.499623084" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.918148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" podStartSLOduration=122.91814313 podStartE2EDuration="2m2.91814313s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.908890234 +0000 UTC m=+146.490845079" watchObservedRunningTime="2026-02-18 19:36:22.91814313 +0000 UTC m=+146.500097975" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.962795 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.963321 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.463306136 +0000 UTC m=+147.045260981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.022116 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-f874p" podStartSLOduration=123.022088106 podStartE2EDuration="2m3.022088106s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.973858631 +0000 UTC m=+146.555813476" watchObservedRunningTime="2026-02-18 19:36:23.022088106 +0000 UTC m=+146.604042951" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.023069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podStartSLOduration=123.023064668 podStartE2EDuration="2m3.023064668s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.016954121 +0000 UTC m=+146.598908986" watchObservedRunningTime="2026-02-18 19:36:23.023064668 +0000 UTC m=+146.605019513" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.049169 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rmh4d" podStartSLOduration=8.049150559 podStartE2EDuration="8.049150559s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.048628448 +0000 UTC m=+146.630583293" watchObservedRunningTime="2026-02-18 19:36:23.049150559 +0000 UTC m=+146.631105404" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.067469 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.070142 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.570126637 +0000 UTC m=+147.152081482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.093539 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" podStartSLOduration=123.093520688 podStartE2EDuration="2m3.093520688s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.092322212 +0000 UTC m=+146.674277057" watchObservedRunningTime="2026-02-18 19:36:23.093520688 +0000 UTC m=+146.675475533" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.116677 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" podStartSLOduration=123.116657014 podStartE2EDuration="2m3.116657014s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.116354977 +0000 UTC m=+146.698309822" watchObservedRunningTime="2026-02-18 19:36:23.116657014 +0000 UTC m=+146.698611859" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.168587 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" podStartSLOduration=123.168571091 podStartE2EDuration="2m3.168571091s" podCreationTimestamp="2026-02-18 
19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.165668536 +0000 UTC m=+146.747623381" watchObservedRunningTime="2026-02-18 19:36:23.168571091 +0000 UTC m=+146.750525946" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.170718 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.171131 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.671115608 +0000 UTC m=+147.253070443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.239541 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" podStartSLOduration=124.239525383 podStartE2EDuration="2m4.239525383s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.204532863 +0000 UTC m=+146.786487708" watchObservedRunningTime="2026-02-18 19:36:23.239525383 +0000 UTC m=+146.821480228" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.273245 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.273713 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.773700734 +0000 UTC m=+147.355655579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.276479 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" podStartSLOduration=123.276445876 podStartE2EDuration="2m3.276445876s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.240080365 +0000 UTC m=+146.822035210" watchObservedRunningTime="2026-02-18 19:36:23.276445876 +0000 UTC m=+146.858400721" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.311339 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.311430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.317312 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" podStartSLOduration=123.317293016 podStartE2EDuration="2m3.317293016s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.316734744 +0000 UTC m=+146.898689589" watchObservedRunningTime="2026-02-18 19:36:23.317293016 +0000 UTC 
m=+146.899247861" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.317787 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" podStartSLOduration=123.317783127 podStartE2EDuration="2m3.317783127s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.298648821 +0000 UTC m=+146.880603676" watchObservedRunningTime="2026-02-18 19:36:23.317783127 +0000 UTC m=+146.899737972" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.351855 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.351897 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.353930 4932 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-z2jc5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.353984 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" podUID="26869f13-c7ee-411c-85a1-72338142184c" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.382606 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.383012 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.882996001 +0000 UTC m=+147.464950846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.405826 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" podStartSLOduration=123.405805799 podStartE2EDuration="2m3.405805799s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.405321648 +0000 UTC m=+146.987276483" watchObservedRunningTime="2026-02-18 19:36:23.405805799 +0000 UTC m=+146.987760644" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.407560 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" podStartSLOduration=123.407555268 podStartE2EDuration="2m3.407555268s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.368742733 +0000 UTC m=+146.950697578" watchObservedRunningTime="2026-02-18 19:36:23.407555268 +0000 UTC m=+146.989510113" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.474787 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:23 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:23 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:23 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.474868 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.485736 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.486142 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.98612754 +0000 UTC m=+147.568082385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.587507 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.587674 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.087645192 +0000 UTC m=+147.669600037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.587935 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.588227 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.088219895 +0000 UTC m=+147.670174740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.642624 4932 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tcbfq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.642702 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" podUID="715b331b-b140-461c-9a06-ba6ede3af8b6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.688781 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.689002 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:24.188967081 +0000 UTC m=+147.770921956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.689134 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.689507 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.189497553 +0000 UTC m=+147.771452398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.790225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.790426 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.290402832 +0000 UTC m=+147.872357677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.790605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.790933 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.290921494 +0000 UTC m=+147.872876329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.876882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerStarted","Data":"239ed198d59b96808be74e97039276c1810e62ae4de8443d043aa9f97be81653"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.879054 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" event={"ID":"aa8e769a-613b-40f2-9d07-b034d7871302","Type":"ContainerStarted","Data":"5ee0cb22ae47bde8e853512d61accb4cca285d661e453c10d04fa3d29d05a20c"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.879110 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" event={"ID":"aa8e769a-613b-40f2-9d07-b034d7871302","Type":"ContainerStarted","Data":"5d93c144fbca8fc0014bf5b89687731042631cdcf1fff74a250418795005bb47"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.881151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nqdfv" event={"ID":"908b160b-0e48-4c2c-a35b-45fe25ca093f","Type":"ContainerStarted","Data":"a85b3d7f77355b2a045efa0c57cd1557dbb335545fa660a66e01202a29c65be7"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.881367 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 
19:36:23.883360 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" event={"ID":"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc","Type":"ContainerStarted","Data":"e16d8a42d6545579a2b014e5f758a4bd7cbf5d469e32e2cc2ca89414315db8d2"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.883396 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" event={"ID":"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc","Type":"ContainerStarted","Data":"de5ce862f938b0ae728fa02d72d8c0d92a79e116ac85b6db385eb6ea4b2e5b3a"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.884675 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" event={"ID":"04a7bf0c-8c31-4401-b3db-4b5168a0cac7","Type":"ContainerStarted","Data":"44b7c1eb8596dbf466c104e3e4d456eee051d52c8cffdfae98e624914a22a157"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.886671 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" event={"ID":"998697c8-1e0d-46ae-b92f-ae8faf0faef5","Type":"ContainerStarted","Data":"55c79ab5b58370b271c3ff0ecc683b29415a3303fb48cee7fe133d9f4bd56ee9"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.887785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" event={"ID":"74a8d999-1731-4a72-8ca8-25913744a8e7","Type":"ContainerStarted","Data":"a2ab08dd02b277ccf9b1b5656451c3ddf4010c8961d88e2156ef4096cfcbfeae"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888234 4932 patch_prober.go:28] interesting pod/console-operator-58897d9998-f874p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection 
refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888273 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f874p" podUID="d939dc03-30d6-4839-abd8-1d8d1bbf8cad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888657 4932 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pkfx8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888709 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5c79p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888718 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" podUID="bd39f7e2-211c-4104-a72d-5374a6e95ee1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888741 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.891109 
4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.893756 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.393739866 +0000 UTC m=+147.975694711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.913078 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" podStartSLOduration=123.913061646 podStartE2EDuration="2m3.913061646s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.910615172 +0000 UTC m=+147.492570007" watchObservedRunningTime="2026-02-18 19:36:23.913061646 +0000 UTC m=+147.495016491" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.993662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.003382 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.503367509 +0000 UTC m=+148.085322344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.023063 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" podStartSLOduration=124.023047698 podStartE2EDuration="2m4.023047698s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.022184169 +0000 UTC m=+147.604139014" watchObservedRunningTime="2026-02-18 19:36:24.023047698 +0000 UTC m=+147.605002543" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.023273 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" podStartSLOduration=124.023268833 podStartE2EDuration="2m4.023268833s" podCreationTimestamp="2026-02-18 
19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.963156053 +0000 UTC m=+147.545110888" watchObservedRunningTime="2026-02-18 19:36:24.023268833 +0000 UTC m=+147.605223678" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.065712 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" podStartSLOduration=124.065683198 podStartE2EDuration="2m4.065683198s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.063107321 +0000 UTC m=+147.645062166" watchObservedRunningTime="2026-02-18 19:36:24.065683198 +0000 UTC m=+147.647638043" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.097762 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.098867 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.598834757 +0000 UTC m=+148.180789612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.127877 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.149596 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nqdfv" podStartSLOduration=9.149570108 podStartE2EDuration="9.149570108s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.109513835 +0000 UTC m=+147.691468680" watchObservedRunningTime="2026-02-18 19:36:24.149570108 +0000 UTC m=+147.731524953" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.204413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.204989 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:24.704973823 +0000 UTC m=+148.286928668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.229725 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" podStartSLOduration=124.229709225 podStartE2EDuration="2m4.229709225s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.151356538 +0000 UTC m=+147.733311383" watchObservedRunningTime="2026-02-18 19:36:24.229709225 +0000 UTC m=+147.811664070" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.305820 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.306072 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.806032616 +0000 UTC m=+148.387987461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.306147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.306591 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.806580758 +0000 UTC m=+148.388535603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.400554 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.407028 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.407285 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.907247832 +0000 UTC m=+148.489202677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.407438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.407753 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.907739043 +0000 UTC m=+148.489693888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.463314 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:24 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:24 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:24 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.463399 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.508385 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.508715 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:25.008674793 +0000 UTC m=+148.590629638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.609805 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.610156 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.110139495 +0000 UTC m=+148.692094340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.711484 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.711669 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.211641877 +0000 UTC m=+148.793596712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.711804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.712133 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.212121898 +0000 UTC m=+148.794076743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.812624 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.812790 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.312765531 +0000 UTC m=+148.894720376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.812903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.813215 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.313208001 +0000 UTC m=+148.895162846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.914386 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.914840 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.414825966 +0000 UTC m=+148.996780801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.915471 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"f26d727abe0fc9698fae8597feb6855bce1b709ba1153cca04bba8c6076eb619"} Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.915513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"56fd0fac02496d5c2cab9a02b201698b7fe530f9859cdad374e4bc8433737306"} Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.936998 4932 generic.go:334] "Generic (PLEG): container finished" podID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerID="67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d" exitCode=0 Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.938095 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerDied","Data":"67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d"} Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.953319 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.015872 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016253 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016349 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.019493 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.519476499 +0000 UTC m=+149.101431344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.027837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.031168 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.064345 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.069793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.111304 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.116472 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.116957 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.117323 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.61730861 +0000 UTC m=+149.199263455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.156434 4932 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jr49c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]log ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]etcd ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/max-in-flight-filter ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 18 19:36:25 crc kubenswrapper[4932]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 18 19:36:25 crc kubenswrapper[4932]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/project.openshift.io-projectcache ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 18 19:36:25 crc kubenswrapper[4932]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 18 19:36:25 
crc kubenswrapper[4932]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 19:36:25 crc kubenswrapper[4932]: livez check failed Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.156515 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" podUID="7a63a8af-95ca-447b-9bfa-7aec1033c0b3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.219903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.220215 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.720203523 +0000 UTC m=+149.302158368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.299858 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.321189 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.321541 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.821525872 +0000 UTC m=+149.403480707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.393354 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.426022 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" 
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.426538 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.926515002 +0000 UTC m=+149.508469847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.465650 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:25 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:25 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:25 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.465698 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.528578 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID:
\"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.528928 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.028910655 +0000 UTC m=+149.610865500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.610350 4932 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.629980 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.630325 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.130314855 +0000 UTC m=+149.712269700 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.731667 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.731851 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.231824898 +0000 UTC m=+149.813779743 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.732159 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.732476 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.232462182 +0000 UTC m=+149.814417027 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: W0218 19:36:25.733021 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb WatchSource:0}: Error finding container ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb: Status 404 returned error can't find the container with id ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb
Feb 18 19:36:25 crc kubenswrapper[4932]: W0218 19:36:25.744220 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3 WatchSource:0}: Error finding container c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3: Status 404 returned error can't find the container with id c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.833723 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.833925 4932
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.333899433 +0000 UTC m=+149.915854278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.834150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.834451 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.334436115 +0000 UTC m=+149.916390960 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.935361 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.935740 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.435724473 +0000 UTC m=+150.017679318 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.943025 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8fbdfafb929c168722cdd1f70dda4e7d3cd8f76b642674f6d96f43c8c513012d"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.943068 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.945014 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3298fa47b7c069a0ac00cfcb47c585b1b0f88dc04b22062b39cb968deb3779c7"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.945039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1ead9a2e78d8535670a0692c6bf2d7d511128341670e4780c7705a52db0d44db"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.947293 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"ac52211488c7f4e4b473ddff013d43fd9f96b4d445d5d9074217ca74e851c3e5"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.949114 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"82404893f52cd3bf9e4aad0eef54ec40591fd2b76b9faed5bbc03a6a56c0763b"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.949139 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.949427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.037077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.040364 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.540351515 +0000 UTC m=+150.122306360 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.138770 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.139155 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.639136597 +0000 UTC m=+150.221091442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.164761 4932 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.178783 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" podStartSLOduration=11.17876644 podStartE2EDuration="11.17876644s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:26.030500985 +0000 UTC m=+149.612455830" watchObservedRunningTime="2026-02-18 19:36:26.17876644 +0000 UTC m=+149.760721295"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240495 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"048e17bc-05bf-40e4-9f40-87d936fcf772\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240549 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"048e17bc-05bf-40e4-9f40-87d936fcf772\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240629 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"048e17bc-05bf-40e4-9f40-87d936fcf772\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240909 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.241095 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume" (OuterVolumeSpecName: "config-volume") pod "048e17bc-05bf-40e4-9f40-87d936fcf772" (UID: "048e17bc-05bf-40e4-9f40-87d936fcf772"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.241310 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.741295424 +0000 UTC m=+150.323250269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.245871 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "048e17bc-05bf-40e4-9f40-87d936fcf772" (UID: "048e17bc-05bf-40e4-9f40-87d936fcf772"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.245928 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr" (OuterVolumeSpecName: "kube-api-access-496qr") pod "048e17bc-05bf-40e4-9f40-87d936fcf772" (UID: "048e17bc-05bf-40e4-9f40-87d936fcf772"). InnerVolumeSpecName "kube-api-access-496qr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.337459 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"]
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.337640 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerName="collect-profiles"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.337665 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerName="collect-profiles"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.337752 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerName="collect-profiles"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.338371 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.342937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343199 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.343310 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.843292628 +0000 UTC m=+150.425247473 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343498 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343550 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343566 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343579 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.343788 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-02-18 19:36:26.843781329 +0000 UTC m=+150.425736174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.351536 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.393297 4932 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T19:36:25.61037117Z","Handler":null,"Name":""}
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.397958 4932 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.397995 4932 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444651 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444855 4932
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444893 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.448133 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8".
PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.466421 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:26 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:26 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:26 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.466492 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.532611 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2xgw"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.533455 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.537871 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546282 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546345 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546377 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546411 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.547224 4932
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.547644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.549640 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.549694 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.553403 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.567388 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"certified-operators-qvwc8\" (UID:
\"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.584411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.647593 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.647688 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.647710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.656481 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.745617 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.748444 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.753842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.753938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.754066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.754820 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " 
pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.755678 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.759013 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.780664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.841085 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.849920 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.855653 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.855722 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.855755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.877991 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.938977 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.943035 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.959641 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.959752 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.959871 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.960508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.960987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " 
pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.976355 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.987042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.990009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerDied","Data":"9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b"} Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.990050 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.990183 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.991275 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerStarted","Data":"94c56c7588969970298ca76c9989e0d42da323b423ba2e42eec0825109130ea6"} Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.061318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.061360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.061409 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.074930 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.075275 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.091024 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:36:27 crc kubenswrapper[4932]: W0218 19:36:27.103862 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62bbf001_ce57_471f_ad28_1d892d0d30e9.slice/crio-598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34 WatchSource:0}: Error finding container 598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34: Status 404 returned error can't find the container with id 598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162284 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162351 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod 
\"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.186978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.188134 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.276576 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.278909 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: W0218 19:36:27.334646 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a4229b_f53b_4cd7_b81b_7fc2dfded045.slice/crio-a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029 WatchSource:0}: Error finding container a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029: Status 404 returned error can't find the container with id a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.461108 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.463696 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:27 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:27 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:27 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.463730 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:27 crc kubenswrapper[4932]: W0218 19:36:27.520971 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ac7afd_2261_4af0_9b59_f18c98424c21.slice/crio-c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379 WatchSource:0}: Error finding container 
c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379: Status 404 returned error can't find the container with id c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.606626 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.606696 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.622251 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.622330 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.622682 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:27 crc kubenswrapper[4932]: 
I0218 19:36:27.622733 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.996428 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerStarted","Data":"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"} Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.996834 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerStarted","Data":"e2cd6e9fe7b91c0ea246bc59cf9d11b75cc0eb7a103b52573fd6adf6936ac914"} Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.996866 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.998277 4932 generic.go:334] "Generic (PLEG): container finished" podID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f" exitCode=0 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.998354 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000117 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000435 4932 
generic.go:334] "Generic (PLEG): container finished" podID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerID="6d3ff69895d4bcdcf15d410bfbcd335c0b79b07284d7d99d33d18f064ce3f033" exitCode=0 Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000503 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"6d3ff69895d4bcdcf15d410bfbcd335c0b79b07284d7d99d33d18f064ce3f033"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000524 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerStarted","Data":"598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.003154 4932 generic.go:334] "Generic (PLEG): container finished" podID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" exitCode=0 Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.003268 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.003327 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerStarted","Data":"c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.005329 4932 generic.go:334] "Generic (PLEG): container finished" podID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" exitCode=0 Feb 18 
19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.005373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.005406 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerStarted","Data":"a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.027166 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" podStartSLOduration=128.027143651 podStartE2EDuration="2m8.027143651s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:28.024375819 +0000 UTC m=+151.606330684" watchObservedRunningTime="2026-02-18 19:36:28.027143651 +0000 UTC m=+151.609098506" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.290486 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.290532 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.292620 4932 patch_prober.go:28] interesting pod/console-f9d7485db-fgjll container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.292723 4932 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgjll" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.315225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.323025 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.369103 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.397760 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.462124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.465365 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:28 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:28 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:28 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.465408 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.536647 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.537585 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.541725 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.560039 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.588225 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.588279 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.588367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " 
pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.689489 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.689538 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.689620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.690091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.690657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" 
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.722201 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj"
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.853582 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj"
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.857779 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.946443 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"]
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.948512 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.953628 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.001430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.001467 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.001663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.102417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.102510 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.102526 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.105969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.108053 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.135704 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.136754 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.148565 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.150571 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.150581 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.150682 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.213070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.213262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.284419 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.293946 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.313607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.313640 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.313735 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.338671 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.469814 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:29 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:29 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:29 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.469881 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.496877 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.538433 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.540442 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.541919 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.600317 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.717401 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.717477 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.717511 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.818674 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.818738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.818787 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.819268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.858252 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.858040 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.915590 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.936770 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.938233 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.946110 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"]
Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.958626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"]
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.029874 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerStarted","Data":"79bf00f2e14eaea6ac861e5d5414045b4e7af7c9494be58a0ddf97f7bbd0066e"}
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.031970 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerStarted","Data":"aa8524bb79cb00bc572889b14100dbb8df53c65222c30b9f755ec3035f0dbea0"}
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.059527 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.121644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.121723 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.121778 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.224266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.224814 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.224947 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.226903 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.227336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.255035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.265205 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.288774 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"]
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.468619 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:30 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:30 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:30 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.468688 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.659352 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"]
Feb 18 19:36:30 crc kubenswrapper[4932]: W0218 19:36:30.678729 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2483e7fb_5cc5_4715_8eea_fd5cf6b31d75.slice/crio-d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d WatchSource:0}: Error finding container d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d: Status 404 returned error can't find the container with id d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.060384 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerStarted","Data":"1e7d7bb277500c87441c43a9dbcbe843235a8108c8af31def1f0b1876f3703b9"}
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.064972 4932 generic.go:334] "Generic (PLEG): container finished" podID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerID="a6fd3575dcddfe36fd8dfcc8e6bcb0f7035ca23b01b700d078f298418c1896e8" exitCode=0
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.065543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"a6fd3575dcddfe36fd8dfcc8e6bcb0f7035ca23b01b700d078f298418c1896e8"}
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.090535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerStarted","Data":"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"}
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.098655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerStarted","Data":"df158c2125177f92039a79a6401f4bb6f7b2c14373fe74c537b86d94e6f1ab0e"}
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.110271 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerStarted","Data":"d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d"}
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.466361 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:31 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:31 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:31 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.466582 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.125152 4932 generic.go:334] "Generic (PLEG): container finished" podID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb" exitCode=0
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.125210 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"}
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.129143 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerID="bda935338a806285152d3571a5562901d0dc27851a41082e686230cc48a54915" exitCode=0
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.129181 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"bda935338a806285152d3571a5562901d0dc27851a41082e686230cc48a54915"}
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.134157 4932 generic.go:334] "Generic (PLEG): container finished" podID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648" exitCode=0
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.134274 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"}
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.136391 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerStarted","Data":"0ca1a22601dd73aec8e8f8c77febf4c85644399d7e326c0c391167e64c8df9c5"}
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.165552 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.165535597 podStartE2EDuration="3.165535597s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:32.162624792 +0000 UTC m=+155.744579637" watchObservedRunningTime="2026-02-18 19:36:32.165535597 +0000 UTC m=+155.747490442"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.487060 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:32 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:32 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:32 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.487148 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.579531 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.580215 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.584548 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.584699 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.586324 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.586367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.588802 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.690044 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.690124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.690249 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.723024 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.913480 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.161063 4932 generic.go:334] "Generic (PLEG): container finished" podID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerID="0ca1a22601dd73aec8e8f8c77febf4c85644399d7e326c0c391167e64c8df9c5" exitCode=0
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.161100 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerDied","Data":"0ca1a22601dd73aec8e8f8c77febf4c85644399d7e326c0c391167e64c8df9c5"}
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.365115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.464084 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:33 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:33 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:33 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.466590 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.893972 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nqdfv"
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.178611 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerStarted","Data":"e194023dbd4a48ad09c7343255eaa7721883c6d6ff199ee4dae4cf21de130a3d"}
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.178679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerStarted","Data":"cb8ac628b41bce3099321ef5f45cf31fa63dc85d80a600c0ec1b0aff786fa67b"}
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.206017 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.2059984 podStartE2EDuration="2.2059984s" podCreationTimestamp="2026-02-18 19:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:34.204313662 +0000 UTC m=+157.786268517" watchObservedRunningTime="2026-02-18 19:36:34.2059984 +0000 UTC m=+157.787953245"
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.464956 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:34 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:34 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:34 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.465019 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.192470 4932 generic.go:334] "Generic (PLEG): container finished" podID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerID="e194023dbd4a48ad09c7343255eaa7721883c6d6ff199ee4dae4cf21de130a3d" exitCode=0
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.201578 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerDied","Data":"e194023dbd4a48ad09c7343255eaa7721883c6d6ff199ee4dae4cf21de130a3d"}
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.464296 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:35 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:35 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:35 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.464370 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.931421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5"
Feb 18 19:36:36 crc kubenswrapper[4932]: I0218 19:36:36.464759 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:36 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:36 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:36 crc
kubenswrapper[4932]: healthz check failed Feb 18 19:36:36 crc kubenswrapper[4932]: I0218 19:36:36.465050 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.493945 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.509393 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.621825 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.621882 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.622117 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.622323 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:38 crc kubenswrapper[4932]: I0218 19:36:38.291085 4932 patch_prober.go:28] interesting pod/console-f9d7485db-fgjll container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 19:36:38 crc kubenswrapper[4932]: I0218 19:36:38.291538 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgjll" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.773279 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.773536 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" containerID="cri-o://92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84" gracePeriod=30 Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.798395 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.798758 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager" 
containerID="cri-o://34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203" gracePeriod=30 Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.222638 4932 generic.go:334] "Generic (PLEG): container finished" podID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerID="34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203" exitCode=0 Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.222781 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerDied","Data":"34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203"} Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.560034 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.565754 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"5202101c-f325-4956-a53c-f6b5663ad5cc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714484 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"fbdc287c-8b65-4c46-8697-8af76f3cae17\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714508 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"fbdc287c-8b65-4c46-8697-8af76f3cae17\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"5202101c-f325-4956-a53c-f6b5663ad5cc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5202101c-f325-4956-a53c-f6b5663ad5cc" (UID: "5202101c-f325-4956-a53c-f6b5663ad5cc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714709 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fbdc287c-8b65-4c46-8697-8af76f3cae17" (UID: "fbdc287c-8b65-4c46-8697-8af76f3cae17"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714881 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714902 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.720488 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fbdc287c-8b65-4c46-8697-8af76f3cae17" (UID: "fbdc287c-8b65-4c46-8697-8af76f3cae17"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.720795 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5202101c-f325-4956-a53c-f6b5663ad5cc" (UID: "5202101c-f325-4956-a53c-f6b5663ad5cc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.816665 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.816701 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.243241 4932 generic.go:334] "Generic (PLEG): container finished" podID="18e44919-11c5-4974-9c71-ff803e668247" containerID="92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84" exitCode=0 Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.243523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerDied","Data":"92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84"} Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.247854 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.247860 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerDied","Data":"cb8ac628b41bce3099321ef5f45cf31fa63dc85d80a600c0ec1b0aff786fa67b"} Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.247896 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8ac628b41bce3099321ef5f45cf31fa63dc85d80a600c0ec1b0aff786fa67b" Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.251886 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerDied","Data":"1e7d7bb277500c87441c43a9dbcbe843235a8108c8af31def1f0b1876f3703b9"} Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.251929 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7d7bb277500c87441c43a9dbcbe843235a8108c8af31def1f0b1876f3703b9" Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.252013 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:43 crc kubenswrapper[4932]: I0218 19:36:43.256724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:43 crc kubenswrapper[4932]: I0218 19:36:43.261429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:43 crc kubenswrapper[4932]: I0218 19:36:43.398951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.162905 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.168165 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.184967 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185089 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185154 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185271 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185370 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185501 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185656 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185716 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.187787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.187799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca" (OuterVolumeSpecName: "client-ca") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.188013 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca" (OuterVolumeSpecName: "client-ca") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.189455 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config" (OuterVolumeSpecName: "config") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.191396 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.194490 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config" (OuterVolumeSpecName: "config") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.194605 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62" (OuterVolumeSpecName: "kube-api-access-7mg62") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "kube-api-access-7mg62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.197034 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.198223 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf" (OuterVolumeSpecName: "kube-api-access-b2lqf") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "kube-api-access-b2lqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200549 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200726 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200737 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200749 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerName="pruner" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200756 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerName="pruner" Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerName="pruner" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200770 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerName="pruner" Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200783 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200788 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200864 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" 
containerName="route-controller-manager" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200875 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerName="pruner" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200882 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200894 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerName="pruner" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.201243 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.207101 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.274502 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.275101 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerDied","Data":"aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7"} Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.275158 4932 scope.go:117] "RemoveContainer" containerID="92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.277393 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerDied","Data":"7444c4d3cedc79cae24f1e017b9fa1b3385d64a4dc475008ab7f7a213fdab561"} Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.277461 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.289995 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292114 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292156 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292219 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292292 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292306 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292318 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292329 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292343 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292353 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292364 4932 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292373 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292386 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.310882 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.314748 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.318093 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.321156 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393677 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393709 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393729 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393749 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.394818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " 
pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.394898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.395470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.398142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.411334 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.584760 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:36:46 crc kubenswrapper[4932]: I0218 19:36:46.850293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:47 crc kubenswrapper[4932]: I0218 19:36:47.190000 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e44919-11c5-4974-9c71-ff803e668247" path="/var/lib/kubelet/pods/18e44919-11c5-4974-9c71-ff803e668247/volumes" Feb 18 19:36:47 crc kubenswrapper[4932]: I0218 19:36:47.190722 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" path="/var/lib/kubelet/pods/28fd23a7-1b44-440f-be4a-8c236cf8902b/volumes" Feb 18 19:36:47 crc kubenswrapper[4932]: I0218 19:36:47.627525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:48 crc kubenswrapper[4932]: I0218 19:36:48.294318 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:48 crc kubenswrapper[4932]: I0218 19:36:48.298782 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.623615 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.624464 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627374 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627420 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627391 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627750 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627856 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.628098 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752252 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752336 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752377 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.853908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.854068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod 
\"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.855500 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.855575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.855610 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.857301 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.862126 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.874025 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.949330 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:53 crc kubenswrapper[4932]: E0218 19:36:53.109517 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:36:53 crc kubenswrapper[4932]: E0218 19:36:53.109710 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgm8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j2xgw_openshift-marketplace(62bbf001-ce57-471f-ad28-1d892d0d30e9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:36:53 crc kubenswrapper[4932]: E0218 19:36:53.110869 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j2xgw" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" Feb 18 19:36:55 crc 
kubenswrapper[4932]: I0218 19:36:55.125102 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:57 crc kubenswrapper[4932]: I0218 19:36:57.606814 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:36:57 crc kubenswrapper[4932]: I0218 19:36:57.606932 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:36:58 crc kubenswrapper[4932]: I0218 19:36:58.600150 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:59 crc kubenswrapper[4932]: I0218 19:36:59.754892 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:36:59 crc kubenswrapper[4932]: I0218 19:36:59.836301 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:37:02 crc kubenswrapper[4932]: E0218 19:37:02.233218 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j2xgw" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.674588 4932 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.674869 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gbkr8_openshift-marketplace(29a4229b-f53b-4cd7-b81b-7fc2dfded045): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.676213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gbkr8" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.706523 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.706761 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr45c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qvwc8_openshift-marketplace(cafe1e82-ef19-4345-825e-cc9bf016b353): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.707961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qvwc8" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" Feb 18 19:37:06 crc 
kubenswrapper[4932]: I0218 19:37:06.781621 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.782682 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.785186 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.785279 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.793089 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.793141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.893894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.893948 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.894019 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.912010 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:07 crc kubenswrapper[4932]: I0218 19:37:07.098999 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:07 crc kubenswrapper[4932]: I0218 19:37:07.889531 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:37:07 crc kubenswrapper[4932]: E0218 19:37:07.944197 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qvwc8" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" Feb 18 19:37:07 crc kubenswrapper[4932]: E0218 19:37:07.944257 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gbkr8" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.197793 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.198127 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lttp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4w2tj_openshift-marketplace(b77a623a-ff2e-45aa-9004-b211b0200a3f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.199230 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4w2tj" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" Feb 18 19:37:09 crc 
kubenswrapper[4932]: I0218 19:37:09.199944 4932 scope.go:117] "RemoveContainer" containerID="34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.203800 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.203930 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-522zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fallba
ckToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vwwjl_openshift-marketplace(83fa5ba7-c2d8-4d68-839f-ba2f4cad568a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.205282 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vwwjl" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.226768 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.226922 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-78d5s_openshift-marketplace(2483e7fb-5cc5-4715-8eea-fd5cf6b31d75): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.228371 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-78d5s" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" Feb 18 19:37:09 crc 
kubenswrapper[4932]: I0218 19:37:09.442651 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:37:09 crc kubenswrapper[4932]: W0218 19:37:09.450222 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195f4a7f_a008_4ca1_96d4_771758b838b9.slice/crio-2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b WatchSource:0}: Error finding container 2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b: Status 404 returned error can't find the container with id 2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.478394 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:37:09 crc kubenswrapper[4932]: W0218 19:37:09.489000 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc823406a_c4f6_4335_be43_312c5336c730.slice/crio-f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf WatchSource:0}: Error finding container f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf: Status 404 returned error can't find the container with id f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.723326 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kdjbt"] Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.747543 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:37:09 crc kubenswrapper[4932]: W0218 19:37:09.755091 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-poddba97173_1fe4_4a77_acd6_ec71b7aea5b3.slice/crio-feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129 WatchSource:0}: Error finding container feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129: Status 404 returned error can't find the container with id feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.899926 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerStarted","Data":"fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.903107 4932 generic.go:334] "Generic (PLEG): container finished" podID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" exitCode=0 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.903201 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.912846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerStarted","Data":"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.912886 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerStarted","Data":"f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf"} Feb 18 19:37:09 crc 
kubenswrapper[4932]: I0218 19:37:09.912984 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" containerID="cri-o://157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" gracePeriod=30 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.913400 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.915752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" event={"ID":"1d73072e-7e9b-4ae7-92ca-5950da33ed6c","Type":"ContainerStarted","Data":"e6a7284d67adc70d25e854c2aed04df089ab38032a06db87abea137d5f479fb6"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.917854 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dba97173-1fe4-4a77-acd6-ec71b7aea5b3","Type":"ContainerStarted","Data":"feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.922366 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" containerID="cri-o://d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" gracePeriod=30 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.922565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerStarted","Data":"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"} Feb 18 19:37:09 
crc kubenswrapper[4932]: I0218 19:37:09.922608 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerStarted","Data":"2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.923441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.925857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4w2tj" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.928213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vwwjl" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.928264 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-78d5s" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.947948 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.005949 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" podStartSLOduration=31.005925443 podStartE2EDuration="31.005925443s" podCreationTimestamp="2026-02-18 19:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:09.976939227 +0000 UTC m=+193.558894072" watchObservedRunningTime="2026-02-18 19:37:10.005925443 +0000 UTC m=+193.587880288" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.034778 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podStartSLOduration=31.034752095 podStartE2EDuration="31.034752095s" podCreationTimestamp="2026-02-18 19:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:10.032310171 +0000 UTC m=+193.614265016" watchObservedRunningTime="2026-02-18 19:37:10.034752095 +0000 UTC m=+193.616706940" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.109139 4932 patch_prober.go:28] interesting pod/route-controller-manager-68c79b7788-6k9bw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:53340->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.109214 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:53340->10.217.0.55:8443: read: connection reset by peer" Feb 18 19:37:10 crc 
kubenswrapper[4932]: I0218 19:37:10.109680 4932 patch_prober.go:28] interesting pod/route-controller-manager-68c79b7788-6k9bw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.109735 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.356745 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.394263 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:10 crc kubenswrapper[4932]: E0218 19:37:10.394601 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.394622 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.394771 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.395288 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.402425 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.498899 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-68c79b7788-6k9bw_c823406a-c4f6-4335-be43-312c5336c730/route-controller-manager/0.log" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499256 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499858 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499928 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499980 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.500792 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.500855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.500873 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config" (OuterVolumeSpecName: "config") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501268 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501296 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501427 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"controller-manager-648d7854bd-2rffd\" (UID: 
\"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501479 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501501 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501542 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501552 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501562 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.507619 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.508906 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb" (OuterVolumeSpecName: "kube-api-access-49xsb") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "kube-api-access-49xsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602626 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602679 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602702 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod \"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602766 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod 
\"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602919 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603131 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603151 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.604049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config" (OuterVolumeSpecName: "config") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.604669 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.605003 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca" (OuterVolumeSpecName: "client-ca") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.605376 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.606320 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.607581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627" (OuterVolumeSpecName: "kube-api-access-9b627") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "kube-api-access-9b627". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.610391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.624473 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.629784 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704387 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704434 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704453 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704472 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 
19:37:10.799927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944460 4932 generic.go:334] "Generic (PLEG): container finished" podID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" exitCode=0 Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944504 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerDied","Data":"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerDied","Data":"2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944548 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944834 4932 scope.go:117] "RemoveContainer" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.950701 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerID="fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372" exitCode=0
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.950784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372"}
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.959966 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerStarted","Data":"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2"}
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.970436 4932 scope.go:117] "RemoveContainer" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"
Feb 18 19:37:10 crc kubenswrapper[4932]: E0218 19:37:10.976660 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce\": container with ID starting with d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce not found: ID does not exist" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.976740 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"} err="failed to get container status \"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce\": rpc error: code = NotFound desc = could not find container \"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce\": container with ID starting with d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce not found: ID does not exist"
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.990782 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-68c79b7788-6k9bw_c823406a-c4f6-4335-be43-312c5336c730/route-controller-manager/0.log"
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.990859 4932 generic.go:334] "Generic (PLEG): container finished" podID="c823406a-c4f6-4335-be43-312c5336c730" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" exitCode=255
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.990990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerDied","Data":"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"}
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.991031 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerDied","Data":"f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf"}
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.991060 4932 scope.go:117] "RemoveContainer" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"
Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.991088 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.004665 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p69tc" podStartSLOduration=2.6309662510000003 podStartE2EDuration="45.004646255s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:28.004485406 +0000 UTC m=+151.586440261" lastFinishedPulling="2026-02-18 19:37:10.37816543 +0000 UTC m=+193.960120265" observedRunningTime="2026-02-18 19:37:10.999466479 +0000 UTC m=+194.581421344" watchObservedRunningTime="2026-02-18 19:37:11.004646255 +0000 UTC m=+194.586601100"
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.004846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" event={"ID":"1d73072e-7e9b-4ae7-92ca-5950da33ed6c","Type":"ContainerStarted","Data":"61052ec9d7c58c38600c3eb083a79cedb4677f18cee7a1f55eb74c4fddfc76dd"}
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.004944 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" event={"ID":"1d73072e-7e9b-4ae7-92ca-5950da33ed6c","Type":"ContainerStarted","Data":"b26b828b3d627427637f6dba4bc8e7c635d0c8fa26d6e91863152aef240179a8"}
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.008254 4932 generic.go:334] "Generic (PLEG): container finished" podID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerID="54669d850ab4e2c576ace2b30a4fd353020f94b96a65cf27707838b8b12d61bb" exitCode=0
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.009461 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dba97173-1fe4-4a77-acd6-ec71b7aea5b3","Type":"ContainerDied","Data":"54669d850ab4e2c576ace2b30a4fd353020f94b96a65cf27707838b8b12d61bb"}
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.020385 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"]
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.025193 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"]
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.026311 4932 scope.go:117] "RemoveContainer" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"
Feb 18 19:37:11 crc kubenswrapper[4932]: E0218 19:37:11.027219 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d\": container with ID starting with 157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d not found: ID does not exist" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.027257 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"} err="failed to get container status \"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d\": rpc error: code = NotFound desc = could not find container \"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d\": container with ID starting with 157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d not found: ID does not exist"
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.058307 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kdjbt" podStartSLOduration=171.058292891 podStartE2EDuration="2m51.058292891s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:11.052537612 +0000 UTC m=+194.634492467" watchObservedRunningTime="2026-02-18 19:37:11.058292891 +0000 UTC m=+194.640247736"
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.069128 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"]
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.072527 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"]
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.080566 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"]
Feb 18 19:37:11 crc kubenswrapper[4932]: W0218 19:37:11.081046 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f701e3b_5068_423b_ae72_2097ca900619.slice/crio-ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740 WatchSource:0}: Error finding container ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740: Status 404 returned error can't find the container with id ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.187620 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" path="/var/lib/kubelet/pods/195f4a7f-a008-4ca1-96d4-771758b838b9/volumes"
Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.188303 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c823406a-c4f6-4335-be43-312c5336c730" path="/var/lib/kubelet/pods/c823406a-c4f6-4335-be43-312c5336c730/volumes"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.020987 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerStarted","Data":"e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2"}
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.027523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerStarted","Data":"07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20"}
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.027589 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerStarted","Data":"ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740"}
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.048342 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chh8j" podStartSLOduration=3.797922683 podStartE2EDuration="43.048325749s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="2026-02-18 19:36:32.131076919 +0000 UTC m=+155.713031764" lastFinishedPulling="2026-02-18 19:37:11.381479975 +0000 UTC m=+194.963434830" observedRunningTime="2026-02-18 19:37:12.046408356 +0000 UTC m=+195.628363221" watchObservedRunningTime="2026-02-18 19:37:12.048325749 +0000 UTC m=+195.630280594"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.263468 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.286030 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" podStartSLOduration=13.285987577 podStartE2EDuration="13.285987577s" podCreationTimestamp="2026-02-18 19:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:12.076437816 +0000 UTC m=+195.658392671" watchObservedRunningTime="2026-02-18 19:37:12.285987577 +0000 UTC m=+195.867942422"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.340850 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") "
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.340927 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") "
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.341084 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dba97173-1fe4-4a77-acd6-ec71b7aea5b3" (UID: "dba97173-1fe4-4a77-acd6-ec71b7aea5b3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.341452 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.348338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dba97173-1fe4-4a77-acd6-ec71b7aea5b3" (UID: "dba97173-1fe4-4a77-acd6-ec71b7aea5b3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.442781 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634090 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"]
Feb 18 19:37:12 crc kubenswrapper[4932]: E0218 19:37:12.634335 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634348 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager"
Feb 18 19:37:12 crc kubenswrapper[4932]: E0218 19:37:12.634362 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerName="pruner"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634368 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerName="pruner"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634476 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerName="pruner"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634487 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634831 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637178 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637219 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637327 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637396 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637435 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637711 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644572 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644613 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644654 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644705 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.645786 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"]
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.746639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.746678 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.749777 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.769950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.951569 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.049409 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.051295 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dba97173-1fe4-4a77-acd6-ec71b7aea5b3","Type":"ContainerDied","Data":"feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129"}
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.051436 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.052032 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.060432 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.370880 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"]
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.581929 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.583706 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.586176 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.586472 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.588151 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.660039 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.660144 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.660205 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761608 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761760 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.781665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.902098 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.099605 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"]
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.108277 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerStarted","Data":"1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03"}
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.108859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerStarted","Data":"ae3a5e90285132f6077bd152728f0c93ddc5e392da325f1bb2715b4f11c105b6"}
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.109491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.155667 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" podStartSLOduration=15.155648527 podStartE2EDuration="15.155648527s" podCreationTimestamp="2026-02-18 19:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:14.153080025 +0000 UTC m=+197.735034870" watchObservedRunningTime="2026-02-18 19:37:14.155648527 +0000 UTC m=+197.737603372"
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.246711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.366959 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:15 crc kubenswrapper[4932]: I0218 19:37:15.115549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerStarted","Data":"89f0774e9a169a85e00453d4419c3e930e811396c9527b57c8e29093ef32ec9f"}
Feb 18 19:37:15 crc kubenswrapper[4932]: I0218 19:37:15.115622 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerStarted","Data":"47e12ed4376656b94af9a3460a8df57cde49986c200ba6e60e8d0c9fbcd288a4"}
Feb 18 19:37:15 crc kubenswrapper[4932]: I0218 19:37:15.134909 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.134888468 podStartE2EDuration="2.134888468s" podCreationTimestamp="2026-02-18 19:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:15.130842011 +0000 UTC m=+198.712796856" watchObservedRunningTime="2026-02-18 19:37:15.134888468 +0000 UTC m=+198.716843343"
Feb 18 19:37:17 crc kubenswrapper[4932]: I0218 19:37:17.280005 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p69tc"
Feb 18 19:37:17 crc kubenswrapper[4932]: I0218 19:37:17.280473 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p69tc"
Feb 18 19:37:17 crc kubenswrapper[4932]: I0218 19:37:17.446829 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p69tc"
Feb 18 19:37:18 crc kubenswrapper[4932]: I0218 19:37:18.135113 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerStarted","Data":"5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1"}
Feb 18 19:37:18 crc kubenswrapper[4932]: I0218 19:37:18.178907 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p69tc"
Feb 18 19:37:18 crc kubenswrapper[4932]: I0218 19:37:18.415148 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p69tc"]
Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.143554 4932 generic.go:334] "Generic (PLEG): container finished" podID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerID="5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1" exitCode=0
Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.143634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1"}
Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.916634 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.918273 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.958409 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.149938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerStarted","Data":"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508"}
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.151666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerStarted","Data":"13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b"}
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.151981 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p69tc" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" containerID="cri-o://a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" gracePeriod=2
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.186628 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2xgw" podStartSLOduration=2.636009801 podStartE2EDuration="54.186611517s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:28.001909378 +0000 UTC m=+151.583864233" lastFinishedPulling="2026-02-18 19:37:19.552511104 +0000 UTC m=+203.134465949" observedRunningTime="2026-02-18 19:37:20.185392098 +0000 UTC m=+203.767346943" watchObservedRunningTime="2026-02-18 19:37:20.186611517 +0000 UTC m=+203.768566362"
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.195489 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.856946 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p69tc"
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.868659 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"81ac7afd-2261-4af0-9b59-f18c98424c21\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") "
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.868790 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod \"81ac7afd-2261-4af0-9b59-f18c98424c21\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") "
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.868859 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"81ac7afd-2261-4af0-9b59-f18c98424c21\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") "
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.869455 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities" (OuterVolumeSpecName: "utilities") pod "81ac7afd-2261-4af0-9b59-f18c98424c21" (UID: "81ac7afd-2261-4af0-9b59-f18c98424c21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.876001 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw" (OuterVolumeSpecName: "kube-api-access-vvppw") pod "81ac7afd-2261-4af0-9b59-f18c98424c21" (UID: "81ac7afd-2261-4af0-9b59-f18c98424c21"). InnerVolumeSpecName "kube-api-access-vvppw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.970114 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.970145 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.104900 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81ac7afd-2261-4af0-9b59-f18c98424c21" (UID: "81ac7afd-2261-4af0-9b59-f18c98424c21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.159904 4932 generic.go:334] "Generic (PLEG): container finished" podID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" exitCode=0
Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.159971 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2"}
Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.159998 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379"}
Feb 18 19:37:21 crc kubenswrapper[4932]: I0218
19:37:21.160014 4932 scope.go:117] "RemoveContainer" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.160077 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.166966 4932 generic.go:334] "Generic (PLEG): container finished" podID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.167075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508"} Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.170977 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.184294 4932 scope.go:117] "RemoveContainer" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.222289 4932 scope.go:117] "RemoveContainer" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.237018 4932 scope.go:117] "RemoveContainer" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" Feb 18 19:37:21 crc kubenswrapper[4932]: E0218 19:37:21.242166 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2\": container with ID 
starting with a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2 not found: ID does not exist" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.242227 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2"} err="failed to get container status \"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2\": rpc error: code = NotFound desc = could not find container \"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2\": container with ID starting with a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2 not found: ID does not exist" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.242253 4932 scope.go:117] "RemoveContainer" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" Feb 18 19:37:21 crc kubenswrapper[4932]: E0218 19:37:21.243138 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3\": container with ID starting with c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3 not found: ID does not exist" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.243202 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3"} err="failed to get container status \"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3\": rpc error: code = NotFound desc = could not find container \"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3\": container with ID starting with c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3 not found: 
ID does not exist" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.243243 4932 scope.go:117] "RemoveContainer" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" Feb 18 19:37:21 crc kubenswrapper[4932]: E0218 19:37:21.244244 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547\": container with ID starting with 1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547 not found: ID does not exist" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.244291 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547"} err="failed to get container status \"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547\": rpc error: code = NotFound desc = could not find container \"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547\": container with ID starting with 1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547 not found: ID does not exist" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.246927 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.250152 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:37:23 crc kubenswrapper[4932]: I0218 19:37:23.187912 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" path="/var/lib/kubelet/pods/81ac7afd-2261-4af0-9b59-f18c98424c21/volumes" Feb 18 19:37:26 crc kubenswrapper[4932]: I0218 19:37:26.850764 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:26 crc kubenswrapper[4932]: I0218 19:37:26.851335 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:26 crc kubenswrapper[4932]: I0218 19:37:26.903619 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.250052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.607191 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.607277 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.607338 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.608058 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.608135 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e" gracePeriod=600 Feb 18 19:37:28 crc kubenswrapper[4932]: I0218 19:37:28.217460 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e" exitCode=0 Feb 18 19:37:28 crc kubenswrapper[4932]: I0218 19:37:28.217566 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.228842 4932 generic.go:334] "Generic (PLEG): container finished" podID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.228931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.241621 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerStarted","Data":"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.250593 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.253479 4932 generic.go:334] "Generic (PLEG): container finished" podID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerID="399844cbfb1eed438dbae81663b568d5834893c25f35e7193be65debdd42cfaa" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.253575 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"399844cbfb1eed438dbae81663b568d5834893c25f35e7193be65debdd42cfaa"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.257785 4932 generic.go:334] "Generic (PLEG): container finished" podID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.257895 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.261535 4932 generic.go:334] "Generic (PLEG): container finished" podID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.261605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"} Feb 18 19:37:29 crc 
kubenswrapper[4932]: I0218 19:37:29.295694 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gbkr8" podStartSLOduration=5.910692654 podStartE2EDuration="1m3.295673028s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:28.006755906 +0000 UTC m=+151.588710761" lastFinishedPulling="2026-02-18 19:37:25.39173629 +0000 UTC m=+208.973691135" observedRunningTime="2026-02-18 19:37:29.289128489 +0000 UTC m=+212.871083334" watchObservedRunningTime="2026-02-18 19:37:29.295673028 +0000 UTC m=+212.877627883" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.271835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerStarted","Data":"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.276521 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerStarted","Data":"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.279832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerStarted","Data":"8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.286011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerStarted","Data":"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.293258 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwwjl" podStartSLOduration=4.58082859 podStartE2EDuration="1m2.293234713s" podCreationTimestamp="2026-02-18 19:36:28 +0000 UTC" firstStartedPulling="2026-02-18 19:36:32.127320435 +0000 UTC m=+155.709275280" lastFinishedPulling="2026-02-18 19:37:29.839726558 +0000 UTC m=+213.421681403" observedRunningTime="2026-02-18 19:37:30.29229964 +0000 UTC m=+213.874254515" watchObservedRunningTime="2026-02-18 19:37:30.293234713 +0000 UTC m=+213.875189558" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.314776 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78d5s" podStartSLOduration=3.724147179 podStartE2EDuration="1m1.314750184s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="2026-02-18 19:36:32.135326804 +0000 UTC m=+155.717281649" lastFinishedPulling="2026-02-18 19:37:29.725929789 +0000 UTC m=+213.307884654" observedRunningTime="2026-02-18 19:37:30.314713174 +0000 UTC m=+213.896668049" watchObservedRunningTime="2026-02-18 19:37:30.314750184 +0000 UTC m=+213.896705049" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.357010 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qvwc8" podStartSLOduration=2.558596839 podStartE2EDuration="1m4.356986508s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:27.99972549 +0000 UTC m=+151.581680345" lastFinishedPulling="2026-02-18 19:37:29.798115149 +0000 UTC m=+213.380070014" observedRunningTime="2026-02-18 19:37:30.34304168 +0000 UTC m=+213.924996535" watchObservedRunningTime="2026-02-18 19:37:30.356986508 +0000 UTC m=+213.938941353" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.368639 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-4w2tj" podStartSLOduration=3.741856795 podStartE2EDuration="1m2.36861387s" podCreationTimestamp="2026-02-18 19:36:28 +0000 UTC" firstStartedPulling="2026-02-18 19:36:31.070276813 +0000 UTC m=+154.652231658" lastFinishedPulling="2026-02-18 19:37:29.697033848 +0000 UTC m=+213.278988733" observedRunningTime="2026-02-18 19:37:30.364139142 +0000 UTC m=+213.946094007" watchObservedRunningTime="2026-02-18 19:37:30.36861387 +0000 UTC m=+213.950568735" Feb 18 19:37:36 crc kubenswrapper[4932]: I0218 19:37:36.657406 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:36 crc kubenswrapper[4932]: I0218 19:37:36.658107 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:36 crc kubenswrapper[4932]: I0218 19:37:36.730946 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.076411 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.076555 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.129815 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.401776 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.412709 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.971228 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:37:38 crc kubenswrapper[4932]: I0218 19:37:38.854417 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:38 crc kubenswrapper[4932]: I0218 19:37:38.854459 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:38 crc kubenswrapper[4932]: I0218 19:37:38.911151 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.165349 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" containerID="cri-o://6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34" gracePeriod=15 Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.294954 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.294998 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.335476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.360139 4932 generic.go:334] "Generic (PLEG): container finished" podID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerID="6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34" exitCode=0 
Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.360678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerDied","Data":"6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34"} Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.360807 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gbkr8" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" containerID="cri-o://3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" gracePeriod=2 Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.395953 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.398713 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.647037 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658796 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658876 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658915 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658966 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659004 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: 
\"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659065 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659166 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659225 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659331 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659359 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659392 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc 
kubenswrapper[4932]: I0218 19:37:39.659870 4932 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.665934 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666051 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb" (OuterVolumeSpecName: "kube-api-access-kmwdb") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "kube-api-access-kmwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666211 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666622 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666698 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666827 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.667509 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.669062 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.679759 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688232 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"] Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688467 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-utilities" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688480 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-utilities" Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688492 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688501 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" 
containerName="oauth-openshift" Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688512 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-content" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688520 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-content" Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688537 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688546 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688674 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688691 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.689123 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.691536 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.692415 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.700391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.700789 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.703995 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"] Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.744244 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.744477 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" containerID="cri-o://07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20" gracePeriod=30 Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760810 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a467e296-550a-46dd-b346-358df4c6ad1d-audit-dir\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760865 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760916 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760947 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760974 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760998 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksscj\" (UniqueName: \"kubernetes.io/projected/a467e296-550a-46dd-b346-358df4c6ad1d-kube-api-access-ksscj\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761046 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761068 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761090 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-audit-policies\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761252 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761267 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 
crc kubenswrapper[4932]: I0218 19:37:39.761279 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761291 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761303 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761314 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761327 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761339 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761351 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761362 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761373 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761386 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761398 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.822008 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.822224 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" containerID="cri-o://1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03" gracePeriod=30 Feb 18 19:37:39 crc 
kubenswrapper[4932]: I0218 19:37:39.843529 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862036 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862114 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862258 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862437 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862493 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a467e296-550a-46dd-b346-358df4c6ad1d-audit-dir\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: 
\"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862574 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862618 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862636 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksscj\" (UniqueName: \"kubernetes.io/projected/a467e296-550a-46dd-b346-358df4c6ad1d-kube-api-access-ksscj\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862677 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862745 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-audit-policies\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862943 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities" (OuterVolumeSpecName: "utilities") pod "29a4229b-f53b-4cd7-b81b-7fc2dfded045" (UID: "29a4229b-f53b-4cd7-b81b-7fc2dfded045"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.863509 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.863551 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.863664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.864264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a467e296-550a-46dd-b346-358df4c6ad1d-audit-dir\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.864411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-audit-policies\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.865342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw" (OuterVolumeSpecName: "kube-api-access-6hghw") pod "29a4229b-f53b-4cd7-b81b-7fc2dfded045" (UID: "29a4229b-f53b-4cd7-b81b-7fc2dfded045"). InnerVolumeSpecName "kube-api-access-6hghw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.866309 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.866434 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.866729 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.867333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.867420 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.868013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.871572 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.874801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.881203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksscj\" (UniqueName: \"kubernetes.io/projected/a467e296-550a-46dd-b346-358df4c6ad1d-kube-api-access-ksscj\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.921751 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a4229b-f53b-4cd7-b81b-7fc2dfded045" (UID: "29a4229b-f53b-4cd7-b81b-7fc2dfded045"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.964106 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.964139 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.964149 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.027109 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.266644 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.266694 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.308008 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.367711 4932 generic.go:334] "Generic (PLEG): container finished" podID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerID="1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.367836 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerDied","Data":"1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.369089 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerDied","Data":"62838236ab987cac95945631bbd754af35252c7b859d7a4d83e36fd02b26a5f7"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.369130 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.369151 4932 scope.go:117] "RemoveContainer" containerID="6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.372732 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f701e3b-5068-423b-ae72-2097ca900619" containerID="07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.372786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerDied","Data":"07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376729 4932 generic.go:334] "Generic (PLEG): container finished" podID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376770 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376903 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.388205 4932 scope.go:117] "RemoveContainer" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.410626 4932 scope.go:117] "RemoveContainer" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.414971 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.421334 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.424746 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.442290 4932 scope.go:117] "RemoveContainer" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.446905 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:37:40 
crc kubenswrapper[4932]: I0218 19:37:40.449756 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.456213 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"] Feb 18 19:37:40 crc kubenswrapper[4932]: W0218 19:37:40.482591 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda467e296_550a_46dd_b346_358df4c6ad1d.slice/crio-29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796 WatchSource:0}: Error finding container 29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796: Status 404 returned error can't find the container with id 29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.489361 4932 scope.go:117] "RemoveContainer" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" Feb 18 19:37:40 crc kubenswrapper[4932]: E0218 19:37:40.490113 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea\": container with ID starting with 3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea not found: ID does not exist" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490147 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea"} err="failed to get container status \"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea\": rpc error: code = NotFound desc = could not find container \"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea\": container 
with ID starting with 3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea not found: ID does not exist" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490188 4932 scope.go:117] "RemoveContainer" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" Feb 18 19:37:40 crc kubenswrapper[4932]: E0218 19:37:40.490499 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508\": container with ID starting with cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508 not found: ID does not exist" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490518 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508"} err="failed to get container status \"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508\": rpc error: code = NotFound desc = could not find container \"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508\": container with ID starting with cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508 not found: ID does not exist" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490533 4932 scope.go:117] "RemoveContainer" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" Feb 18 19:37:40 crc kubenswrapper[4932]: E0218 19:37:40.491015 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3\": container with ID starting with a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3 not found: ID does not exist" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" 
Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.491032 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3"} err="failed to get container status \"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3\": rpc error: code = NotFound desc = could not find container \"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3\": container with ID starting with a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3 not found: ID does not exist" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.800756 4932 patch_prober.go:28] interesting pod/controller-manager-648d7854bd-2rffd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.800813 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.017150 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037239 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037426 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-content" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037437 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-content" Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037453 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-utilities" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037460 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-utilities" Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037469 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037475 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037485 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037492 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037578 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037587 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037947 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.078696 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.086777 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087214 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087254 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087498 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087553 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087604 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087651 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc 
kubenswrapper[4932]: I0218 19:37:41.089861 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config" (OuterVolumeSpecName: "config") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.090129 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.092692 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.092790 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b" (OuterVolumeSpecName: "kube-api-access-88l4b") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "kube-api-access-88l4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.122917 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.188404 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.188491 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.188554 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189338 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189426 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config" (OuterVolumeSpecName: "config") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189727 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189812 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189909 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189987 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " 
pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190039 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190054 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190063 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190073 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190081 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190159 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190730 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.191756 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.191873 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" path="/var/lib/kubelet/pods/215a0eae-8c5b-4b0e-86f6-056bc6f696ff/volumes" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.192497 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" path="/var/lib/kubelet/pods/29a4229b-f53b-4cd7-b81b-7fc2dfded045/volumes" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.193484 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.194245 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.194605 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.199362 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6" (OuterVolumeSpecName: "kube-api-access-5n7r6") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "kube-api-access-5n7r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.204075 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291036 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291066 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291077 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291085 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.348936 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.392591 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" event={"ID":"a467e296-550a-46dd-b346-358df4c6ad1d","Type":"ContainerStarted","Data":"29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796"} Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.394205 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.394242 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerDied","Data":"ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740"} Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.394313 4932 scope.go:117] "RemoveContainer" containerID="07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.407133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerDied","Data":"ae3a5e90285132f6077bd152728f0c93ddc5e392da325f1bb2715b4f11c105b6"} Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.407215 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.428262 4932 scope.go:117] "RemoveContainer" containerID="1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.444628 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.455831 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.468588 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.469134 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.550088 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.658318 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658330 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658426 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.663868 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664652 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664666 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664732 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664763 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.665612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.679308 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.680639 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694732 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694795 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694898 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694925 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" 
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.795867 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.795938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.795976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.796046 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.796079 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: 
\"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.798000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.798129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.799407 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.804047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.825424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmc2p\" (UniqueName: 
\"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.996205 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.368477 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"] Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.368685 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwwjl" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" containerID="cri-o://2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" gracePeriod=2 Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.409435 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.441872 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" event={"ID":"a467e296-550a-46dd-b346-358df4c6ad1d","Type":"ContainerStarted","Data":"ea3e484f416ba68906069e5b3fb84a68ee488f726d61911dc19a9be43ad02a1a"} Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.442432 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.445638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" 
event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerStarted","Data":"a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca"} Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.445672 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerStarted","Data":"e2a8883038eeab43da38d5bcf9fb3ee3f03931e9147fd7652ed3b803d8e18880"} Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.446390 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.448886 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.467876 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" podStartSLOduration=28.467857165 podStartE2EDuration="28.467857165s" podCreationTimestamp="2026-02-18 19:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:42.466767028 +0000 UTC m=+226.048721873" watchObservedRunningTime="2026-02-18 19:37:42.467857165 +0000 UTC m=+226.049812010" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.487779 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" podStartSLOduration=3.487764447 podStartE2EDuration="3.487764447s" podCreationTimestamp="2026-02-18 19:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:37:42.484935919 +0000 UTC m=+226.066890764" watchObservedRunningTime="2026-02-18 19:37:42.487764447 +0000 UTC m=+226.069719292" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.567343 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.879916 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.911706 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.911749 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.911868 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.912628 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities" (OuterVolumeSpecName: "utilities") pod "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" (UID: "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.921366 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn" (OuterVolumeSpecName: "kube-api-access-522zn") pod "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" (UID: "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a"). InnerVolumeSpecName "kube-api-access-522zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.939427 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" (UID: "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.012740 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.012776 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.012787 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.184965 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f701e3b-5068-423b-ae72-2097ca900619" 
path="/var/lib/kubelet/pods/3f701e3b-5068-423b-ae72-2097ca900619/volumes" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.185609 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" path="/var/lib/kubelet/pods/d1eaf5e6-7318-4473-8317-8a38fcca1fdc/volumes" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.453056 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerStarted","Data":"ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40"} Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.453104 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerStarted","Data":"9884fc5b935e7ec29f1fa3ab7fe35eb2cbfe8ccdcca7c00b3c99f77fb62e0b75"} Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.453520 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455817 4932 generic.go:334] "Generic (PLEG): container finished" podID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" exitCode=0 Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"} Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455926 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455959 4932 scope.go:117] "RemoveContainer" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455940 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"aa8524bb79cb00bc572889b14100dbb8df53c65222c30b9f755ec3035f0dbea0"} Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.463410 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.483967 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" podStartSLOduration=4.483943569 podStartE2EDuration="4.483943569s" podCreationTimestamp="2026-02-18 19:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:43.477575935 +0000 UTC m=+227.059530820" watchObservedRunningTime="2026-02-18 19:37:43.483943569 +0000 UTC m=+227.065898424" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.484779 4932 scope.go:117] "RemoveContainer" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.516014 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"] Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.516132 4932 scope.go:117] "RemoveContainer" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.521441 4932 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"] Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.544226 4932 scope.go:117] "RemoveContainer" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" Feb 18 19:37:43 crc kubenswrapper[4932]: E0218 19:37:43.544719 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5\": container with ID starting with 2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5 not found: ID does not exist" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.544774 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"} err="failed to get container status \"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5\": rpc error: code = NotFound desc = could not find container \"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5\": container with ID starting with 2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5 not found: ID does not exist" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.544816 4932 scope.go:117] "RemoveContainer" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0" Feb 18 19:37:43 crc kubenswrapper[4932]: E0218 19:37:43.545097 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0\": container with ID starting with e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0 not found: ID does not exist" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0" Feb 18 19:37:43 crc 
kubenswrapper[4932]: I0218 19:37:43.545142 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"} err="failed to get container status \"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0\": rpc error: code = NotFound desc = could not find container \"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0\": container with ID starting with e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0 not found: ID does not exist" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.545167 4932 scope.go:117] "RemoveContainer" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb" Feb 18 19:37:43 crc kubenswrapper[4932]: E0218 19:37:43.545603 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb\": container with ID starting with edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb not found: ID does not exist" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb" Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.545656 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"} err="failed to get container status \"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb\": rpc error: code = NotFound desc = could not find container \"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb\": container with ID starting with edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb not found: ID does not exist" Feb 18 19:37:44 crc kubenswrapper[4932]: I0218 19:37:44.768876 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"] Feb 18 19:37:44 crc 
kubenswrapper[4932]: I0218 19:37:44.769091 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-78d5s" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" containerID="cri-o://d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" gracePeriod=2 Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.188394 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" path="/var/lib/kubelet/pods/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a/volumes" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.201390 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.249091 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.249141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.249226 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.250144 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities" (OuterVolumeSpecName: "utilities") pod "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" (UID: "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.259493 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5" (OuterVolumeSpecName: "kube-api-access-h5ks5") pod "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" (UID: "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75"). InnerVolumeSpecName "kube-api-access-h5ks5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.350468 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.350517 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.375801 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" (UID: "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.452322 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488411 4932 generic.go:334] "Generic (PLEG): container finished" podID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" exitCode=0 Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488479 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488595 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"} Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d"} Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488725 4932 scope.go:117] "RemoveContainer" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.524690 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"] Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.527689 4932 scope.go:117] "RemoveContainer" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 
19:37:45.528746 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"] Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.543076 4932 scope.go:117] "RemoveContainer" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.567225 4932 scope.go:117] "RemoveContainer" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" Feb 18 19:37:45 crc kubenswrapper[4932]: E0218 19:37:45.567829 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531\": container with ID starting with d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531 not found: ID does not exist" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.568577 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"} err="failed to get container status \"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531\": rpc error: code = NotFound desc = could not find container \"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531\": container with ID starting with d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531 not found: ID does not exist" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.568619 4932 scope.go:117] "RemoveContainer" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af" Feb 18 19:37:45 crc kubenswrapper[4932]: E0218 19:37:45.568974 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af\": container with ID 
starting with 0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af not found: ID does not exist" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.569006 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"} err="failed to get container status \"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af\": rpc error: code = NotFound desc = could not find container \"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af\": container with ID starting with 0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af not found: ID does not exist" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.569087 4932 scope.go:117] "RemoveContainer" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648" Feb 18 19:37:45 crc kubenswrapper[4932]: E0218 19:37:45.570467 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648\": container with ID starting with 4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648 not found: ID does not exist" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648" Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.570498 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"} err="failed to get container status \"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648\": rpc error: code = NotFound desc = could not find container \"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648\": container with ID starting with 4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648 not found: 
ID does not exist" Feb 18 19:37:47 crc kubenswrapper[4932]: I0218 19:37:47.187049 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" path="/var/lib/kubelet/pods/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75/volumes" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.255082 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.255968 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.255994 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256012 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256025 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256056 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256068 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256088 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256101 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256121 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256134 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256156 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256200 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256387 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256428 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257037 4932 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257287 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257682 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257750 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257797 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257715 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257891 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" gracePeriod=15 Feb 18 19:37:52 crc 
kubenswrapper[4932]: I0218 19:37:52.259120 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.259429 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.259457 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.259525 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.259546 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260826 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260844 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260866 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260880 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260898 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260913 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260930 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260942 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261167 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261232 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261251 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261270 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261285 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261308 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.261513 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261528 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.349986 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350752 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 
19:37:52.350801 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350959 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.351299 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 
19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452508 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452636 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452649 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452711 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452687 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452789 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452834 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.453030 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.453070 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.546198 4932 generic.go:334] "Generic (PLEG): container finished" podID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" 
containerID="89f0774e9a169a85e00453d4419c3e930e811396c9527b57c8e29093ef32ec9f" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.546318 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerDied","Data":"89f0774e9a169a85e00453d4419c3e930e811396c9527b57c8e29093ef32ec9f"} Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.547420 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.547850 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.549775 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.551691 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552737 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552773 4932 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552788 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552802 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" exitCode=2 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552854 4932 scope.go:117] "RemoveContainer" containerID="5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203" Feb 18 19:37:53 crc kubenswrapper[4932]: I0218 19:37:53.563983 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:53 crc kubenswrapper[4932]: I0218 19:37:53.969151 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:53 crc kubenswrapper[4932]: I0218 19:37:53.970446 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.080673 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081149 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081402 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081468 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock" (OuterVolumeSpecName: "var-lock") pod "b38b0e86-4a7b-4436-a0ef-565a61a1eab4" (UID: "b38b0e86-4a7b-4436-a0ef-565a61a1eab4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b38b0e86-4a7b-4436-a0ef-565a61a1eab4" (UID: "b38b0e86-4a7b-4436-a0ef-565a61a1eab4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.089858 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b38b0e86-4a7b-4436-a0ef-565a61a1eab4" (UID: "b38b0e86-4a7b-4436-a0ef-565a61a1eab4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.183399 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.183846 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.184032 4932 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.571810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerDied","Data":"47e12ed4376656b94af9a3460a8df57cde49986c200ba6e60e8d0c9fbcd288a4"} Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.572068 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.572084 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e12ed4376656b94af9a3460a8df57cde49986c200ba6e60e8d0c9fbcd288a4" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.653369 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.657116 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.658080 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.658541 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.659068 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703220 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703322 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703357 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703384 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703414 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.704010 4932 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.704043 4932 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.704062 4932 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.187380 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.583807 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.584994 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" exitCode=0 Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.585099 4932 scope.go:117] "RemoveContainer" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.585140 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.586030 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.587796 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.590905 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.591378 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.613152 4932 scope.go:117] "RemoveContainer" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.642013 4932 scope.go:117] "RemoveContainer" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" Feb 18 19:37:55 crc 
kubenswrapper[4932]: I0218 19:37:55.670115 4932 scope.go:117] "RemoveContainer" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.692037 4932 scope.go:117] "RemoveContainer" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.719410 4932 scope.go:117] "RemoveContainer" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.748318 4932 scope.go:117] "RemoveContainer" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.748904 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\": container with ID starting with 982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5 not found: ID does not exist" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.748981 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5"} err="failed to get container status \"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\": rpc error: code = NotFound desc = could not find container \"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\": container with ID starting with 982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.749058 4932 scope.go:117] "RemoveContainer" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.749555 
4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\": container with ID starting with 376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601 not found: ID does not exist" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.749640 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601"} err="failed to get container status \"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\": rpc error: code = NotFound desc = could not find container \"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\": container with ID starting with 376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.749693 4932 scope.go:117] "RemoveContainer" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.750329 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\": container with ID starting with f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0 not found: ID does not exist" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.750408 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0"} err="failed to get container status \"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\": rpc error: code = 
NotFound desc = could not find container \"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\": container with ID starting with f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.750452 4932 scope.go:117] "RemoveContainer" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.750892 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\": container with ID starting with 58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18 not found: ID does not exist" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.751063 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18"} err="failed to get container status \"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\": rpc error: code = NotFound desc = could not find container \"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\": container with ID starting with 58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.751124 4932 scope.go:117] "RemoveContainer" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.752396 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\": container with ID starting with 
4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7 not found: ID does not exist" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.752448 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7"} err="failed to get container status \"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\": rpc error: code = NotFound desc = could not find container \"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\": container with ID starting with 4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.752478 4932 scope.go:117] "RemoveContainer" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.752798 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\": container with ID starting with 8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c not found: ID does not exist" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.752838 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c"} err="failed to get container status \"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\": rpc error: code = NotFound desc = could not find container \"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\": container with ID starting with 8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c not found: ID does not 
exist" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.184356 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.185277 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.303940 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.304798 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.305366 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.305647 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: 
E0218 19:37:57.306123 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.306218 4932 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.306818 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.306951 4932 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.307714 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:57 crc kubenswrapper[4932]: W0218 19:37:57.358783 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785 WatchSource:0}: Error finding container 0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785: Status 404 returned error can't find the container with id 0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785 Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.362030 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956e7507d0b720 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC m=+240.943460955,LastTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC m=+240.943460955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.508251 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.604313 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785"} Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.909630 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms" Feb 18 19:37:58 crc kubenswrapper[4932]: I0218 19:37:58.615433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1"} Feb 18 19:37:58 crc kubenswrapper[4932]: I0218 19:37:58.616414 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:58 crc kubenswrapper[4932]: E0218 19:37:58.616441 4932 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:58 crc kubenswrapper[4932]: E0218 19:37:58.711266 4932 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s" Feb 18 19:37:59 crc kubenswrapper[4932]: E0218 19:37:59.621574 4932 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:38:00 crc kubenswrapper[4932]: E0218 19:38:00.312825 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="3.2s" Feb 18 19:38:02 crc kubenswrapper[4932]: E0218 19:38:02.710335 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956e7507d0b720 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC m=+240.943460955,LastTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC 
m=+240.943460955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:38:03 crc kubenswrapper[4932]: E0218 19:38:03.514088 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="6.4s" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.663834 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.663959 4932 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04" exitCode=1 Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.664035 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04"} Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.665135 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.665225 4932 scope.go:117] "RemoveContainer" containerID="fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.665466 4932 status_manager.go:851] "Failed to get 
status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.672220 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.672537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c663158c78d673cd290435fe02306d2e388eabe920f2c0971d83cb4233a2dacc"} Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.673508 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.674161 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.178354 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.182975 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.184146 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.184892 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.185589 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.198671 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.198885 4932 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: E0218 19:38:07.199695 4932 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.200426 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: W0218 19:38:07.215710 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d WatchSource:0}: Error finding container 9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d: Status 404 returned error can't find the container with id 9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d Feb 18 19:38:07 crc kubenswrapper[4932]: E0218 19:38:07.264357 4932 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" volumeName="registry-storage" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.309998 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.681751 4932 generic.go:334] 
"Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="605e4403cffb6c05afad1cfa84e897f679145191f00dfca26201582912b754c1" exitCode=0 Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.681884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"605e4403cffb6c05afad1cfa84e897f679145191f00dfca26201582912b754c1"} Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.682296 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d"} Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.682898 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.682921 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: E0218 19:38:07.683418 4932 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.683806 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 
19:38:07.684570 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:08 crc kubenswrapper[4932]: I0218 19:38:08.688367 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a0ff61fcc5eea9d2b70ff6fa451420cd3c979ccb6b28474c592e07fe4b130d88"} Feb 18 19:38:08 crc kubenswrapper[4932]: I0218 19:38:08.688697 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f81ee2b20871ddeb6bd83602f9a8de8c9b70930668c50d3d1c77c00863cb4981"} Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703007 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9450c3ad888f774bc49789dbc4275f929db36ed240c5858f77bb4305626022d"} Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703432 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58049a2acaa78d458cd3a81eae7124d4f804f1b0475cc60e47542dc023ffa61a"} Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"73f5d1cc935e097385509033bc5b8ec515214e4b7af0c8bf77c780ad703090dd"} Feb 18 19:38:09 crc kubenswrapper[4932]: 
I0218 19:38:09.703454 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703548 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703579 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:10 crc kubenswrapper[4932]: I0218 19:38:10.655361 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:10 crc kubenswrapper[4932]: I0218 19:38:10.660086 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:12 crc kubenswrapper[4932]: I0218 19:38:12.201386 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:12 crc kubenswrapper[4932]: I0218 19:38:12.201770 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:12 crc kubenswrapper[4932]: I0218 19:38:12.210005 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:14 crc kubenswrapper[4932]: I0218 19:38:14.717486 4932 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:15 crc kubenswrapper[4932]: I0218 19:38:15.744491 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:15 crc kubenswrapper[4932]: I0218 19:38:15.745012 4932 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:15 crc kubenswrapper[4932]: I0218 19:38:15.749234 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:16 crc kubenswrapper[4932]: I0218 19:38:16.749811 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:16 crc kubenswrapper[4932]: I0218 19:38:16.749848 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:17 crc kubenswrapper[4932]: I0218 19:38:17.202709 4932 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c172ac02-824c-482f-b659-1338ee76566a" Feb 18 19:38:17 crc kubenswrapper[4932]: I0218 19:38:17.315554 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:23 crc kubenswrapper[4932]: I0218 19:38:23.855763 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:38:23 crc kubenswrapper[4932]: I0218 19:38:23.939759 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:38:24 crc kubenswrapper[4932]: I0218 19:38:24.314961 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:38:24 crc kubenswrapper[4932]: I0218 19:38:24.955830 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:38:25 crc kubenswrapper[4932]: 
I0218 19:38:25.192161 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.214201 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.215419 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.270979 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.554362 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.909494 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.962431 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.992358 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.133079 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.153408 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.282251 4932 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.444698 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.461228 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.503653 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.527154 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.530767 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.693026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.939681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.084743 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.098698 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.116884 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.233086 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.312539 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.385146 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.466589 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.638271 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.814671 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.851909 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.911612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.934072 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.115835 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.166790 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.176629 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.486065 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.494069 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.511853 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.626756 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.654041 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.677685 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.735652 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.798346 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.809563 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.999283 4932 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.043451 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.064149 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.117519 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.169065 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.213534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.316360 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.388228 4932 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.414453 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.451271 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.508799 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 
18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.516377 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.520112 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.561955 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.610999 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.647527 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.659491 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.680285 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.703922 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.818838 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.994474 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.193138 4932 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.236085 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.295212 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.314144 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.393104 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.590705 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.602285 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.633467 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.667216 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.707133 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.712618 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.741437 4932 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.749648 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.758808 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.819709 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.923170 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.022976 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.153625 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.154495 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.218563 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.283614 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.377278 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.411033 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.438537 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.451744 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.526766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.529077 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.551155 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.556967 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.575608 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.809136 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.877936 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.904735 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.130125 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.153142 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.210274 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.276737 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.304679 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.391412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.412073 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.461409 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.489802 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.532984 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.560427 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" 
Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.562714 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.584587 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.594794 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.632075 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.638464 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.661890 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.760444 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.780246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.891750 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.935937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.057931 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.095026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.118266 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.137640 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.151368 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.190921 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.214207 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.250061 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.349207 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.537249 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.593772 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.596709 4932 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.659042 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.675271 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.692531 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.693302 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.814251 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.815755 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.915629 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.980535 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.006758 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.047595 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.056429 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.097533 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.139344 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.194288 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.199405 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.204122 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.219313 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.345381 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.413567 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.417491 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.425532 4932 reflector.go:368] 
Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430378 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430767 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430794 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.434481 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.449943 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.44991689 podStartE2EDuration="20.44991689s" podCreationTimestamp="2026-02-18 19:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:34.445838151 +0000 UTC m=+278.027792996" watchObservedRunningTime="2026-02-18 19:38:34.44991689 +0000 UTC m=+278.031871755" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.456734 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.459557 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:38:34 crc 
kubenswrapper[4932]: I0218 19:38:34.519187 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.531429 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.548495 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.623845 4932 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.634715 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.754348 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.822146 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.901598 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.933212 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.092891 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.121877 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.175256 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.191502 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.197725 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.200156 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.211835 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.215377 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.300553 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.306313 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.416645 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.433284 4932 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:35 
crc kubenswrapper[4932]: I0218 19:38:35.580202 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.784219 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.854022 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.886960 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.888001 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.926079 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.932908 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.002952 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.006374 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.034564 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.047854 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.103076 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.231670 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.335367 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.340101 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.353014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.413699 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.432829 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.445388 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.469762 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.512215 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.557513 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.608099 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.742616 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.805371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.845311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.852018 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.900702 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.918563 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.933244 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.040638 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.157966 4932 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:38:37 
crc kubenswrapper[4932]: I0218 19:38:37.158339 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" gracePeriod=5 Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.189339 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.192840 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.243615 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.290691 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.398576 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.400767 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.402422 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.595722 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.809626 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.886944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.893366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.097860 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.116030 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.238231 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.365527 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.380714 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.578545 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.585150 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.595569 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.736109 4932 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.768816 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.837416 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.987259 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.052363 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.161242 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.184354 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.388480 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.465628 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.484677 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.525618 4932 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.551850 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.567429 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.642596 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.755115 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.755927 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" containerID="cri-o://a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca" gracePeriod=30 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.760489 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.760760 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" containerID="cri-o://ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40" gracePeriod=30 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.914707 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerID="ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40" exitCode=0 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.914761 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerDied","Data":"ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40"} Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.916082 4932 generic.go:334] "Generic (PLEG): container finished" podID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerID="a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca" exitCode=0 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.916193 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerDied","Data":"a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca"} Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.080828 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.184029 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.235021 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.240957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248217 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248768 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.249103 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.249146 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config" (OuterVolumeSpecName: "config") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.253856 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.260341 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k" (OuterVolumeSpecName: "kube-api-access-42b9k") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "kube-api-access-42b9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.288139 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.293431 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.325412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349326 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349378 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349446 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349489 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: 
\"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349652 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349663 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349673 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349681 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.350629 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.350663 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca" (OuterVolumeSpecName: "client-ca") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.350697 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config" (OuterVolumeSpecName: "config") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.353870 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p" (OuterVolumeSpecName: "kube-api-access-vmc2p") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "kube-api-access-vmc2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.354197 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.372531 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.405715 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451631 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451678 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451698 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451716 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451736 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.925317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" 
event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerDied","Data":"e2a8883038eeab43da38d5bcf9fb3ee3f03931e9147fd7652ed3b803d8e18880"} Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.927037 4932 scope.go:117] "RemoveContainer" containerID="a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.925821 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.928524 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerDied","Data":"9884fc5b935e7ec29f1fa3ab7fe35eb2cbfe8ccdcca7c00b3c99f77fb62e0b75"} Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.928625 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.988718 4932 scope.go:117] "RemoveContainer" containerID="ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.005998 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.015204 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.024644 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.031366 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.197624 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" path="/var/lib/kubelet/pods/a10acd9d-2f5c-41c0-b221-65865fe30829/volumes" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.198827 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" path="/var/lib/kubelet/pods/cb823dd3-7026-4c20-8dec-73f24b23d9f5/volumes" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.357653 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.708582 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.708954 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.708991 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.709030 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709050 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.709076 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" containerName="installer" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709092 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" containerName="installer" Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.709119 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709136 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709397 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709427 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709455 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" containerName="installer" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709472 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.710223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.719923 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.720559 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.720809 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.721416 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.722269 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.722714 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.726397 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.727579 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.732637 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.734744 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.734867 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.735078 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.735366 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.736405 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.740270 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.756050 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.760021 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771006 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771097 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771146 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771225 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " 
pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771361 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771434 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873093 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873197 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873232 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 
19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873304 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873347 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873376 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.874852 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: 
\"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.875609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.875616 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.875938 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.876791 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.893920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod 
\"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.895906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.897481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.904927 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.036302 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.059322 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.338041 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.378470 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:42 crc kubenswrapper[4932]: W0218 19:38:42.385128 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dd6288c_f4b2_4b2d_aef1_d0c604f6b8b7.slice/crio-b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be WatchSource:0}: Error finding container b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be: Status 404 returned error can't find the container with id b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.723217 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.723292 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.784897 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.784947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785027 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785060 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785122 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785139 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785163 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785283 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785426 4932 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785443 4932 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785453 4932 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785462 4932 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.792420 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.886943 4932 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.944806 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerStarted","Data":"a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.944847 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerStarted","Data":"2a742d73160052609d346519668f631488172d4caaf7fdb275efa43cbb19e621"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.945400 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.948847 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerStarted","Data":"c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.949023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerStarted","Data":"b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.949052 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.950639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951214 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951262 4932 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" exitCode=137 Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951303 4932 scope.go:117] "RemoveContainer" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951414 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.963873 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" podStartSLOduration=3.9638580020000003 podStartE2EDuration="3.963858002s" podCreationTimestamp="2026-02-18 19:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:42.96090242 +0000 UTC m=+286.542857265" watchObservedRunningTime="2026-02-18 19:38:42.963858002 +0000 UTC m=+286.545812847" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.978711 4932 scope.go:117] "RemoveContainer" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" Feb 18 19:38:42 crc kubenswrapper[4932]: E0218 19:38:42.979149 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1\": container with ID starting with ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1 not found: ID does not exist" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.979202 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1"} err="failed to get container status \"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1\": rpc error: code = NotFound desc = could not find container \"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1\": container with ID starting with ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1 not found: ID does not exist" Feb 18 19:38:43 crc kubenswrapper[4932]: I0218 19:38:43.017383 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" podStartSLOduration=4.017361689 podStartE2EDuration="4.017361689s" podCreationTimestamp="2026-02-18 19:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:43.013886235 +0000 UTC m=+286.595841100" watchObservedRunningTime="2026-02-18 19:38:43.017361689 +0000 UTC m=+286.599316534" Feb 18 19:38:43 crc kubenswrapper[4932]: I0218 19:38:43.091319 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:43 crc kubenswrapper[4932]: I0218 19:38:43.197039 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 19:38:56 crc kubenswrapper[4932]: I0218 19:38:56.942844 4932 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 19:38:57 crc kubenswrapper[4932]: I0218 19:38:57.037644 4932 generic.go:334] "Generic (PLEG): container finished" podID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" exitCode=0 Feb 18 19:38:57 crc kubenswrapper[4932]: I0218 19:38:57.037694 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerDied","Data":"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"} Feb 18 19:38:57 crc kubenswrapper[4932]: I0218 19:38:57.038145 4932 scope.go:117] "RemoveContainer" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" Feb 18 19:38:58 crc kubenswrapper[4932]: I0218 
19:38:58.057814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerStarted","Data":"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"} Feb 18 19:38:58 crc kubenswrapper[4932]: I0218 19:38:58.059148 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:38:58 crc kubenswrapper[4932]: I0218 19:38:58.061546 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.052616 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.704777 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.705027 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" containerID="cri-o://a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5" gracePeriod=30 Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.714366 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.714601 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" 
containerID="cri-o://c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7" gracePeriod=30 Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.072662 4932 generic.go:334] "Generic (PLEG): container finished" podID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerID="a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5" exitCode=0 Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.072810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerDied","Data":"a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5"} Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.076649 4932 generic.go:334] "Generic (PLEG): container finished" podID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerID="c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7" exitCode=0 Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.076696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerDied","Data":"c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7"} Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.299892 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.408780 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423100 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423186 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423305 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423350 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423376 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423397 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423434 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.424682 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425554 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425571 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config" (OuterVolumeSpecName: "config") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425748 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config" (OuterVolumeSpecName: "config") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425797 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.428927 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.429243 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh" (OuterVolumeSpecName: "kube-api-access-fxtnh") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "kube-api-access-fxtnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.429347 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf" (OuterVolumeSpecName: "kube-api-access-q2gkf") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "kube-api-access-q2gkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.429848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524871 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524932 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524951 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524967 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524983 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524998 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.525015 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.525029 4932 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.525042 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.086846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerDied","Data":"2a742d73160052609d346519668f631488172d4caaf7fdb275efa43cbb19e621"} Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.086864 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.087306 4932 scope.go:117] "RemoveContainer" containerID="a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.089287 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerDied","Data":"b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be"} Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.089600 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.108298 4932 scope.go:117] "RemoveContainer" containerID="c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.138382 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.143274 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.152207 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.161479 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.185069 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" path="/var/lib/kubelet/pods/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7/volumes" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.185686 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" path="/var/lib/kubelet/pods/3c9c4a73-3821-4c75-a01c-d7f77444ff45/volumes" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.715862 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:01 crc kubenswrapper[4932]: E0218 19:39:01.716427 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" Feb 18 19:39:01 crc 
kubenswrapper[4932]: I0218 19:39:01.716475 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: E0218 19:39:01.716537 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.716559 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.716793 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.716845 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.717738 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.718460 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.719090 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.723612 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.723982 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725123 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725394 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725621 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725853 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.726041 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.727406 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.727464 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.727488 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 
19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.729519 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.730293 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.741995 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.742274 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.759134 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.843867 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.843944 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844098 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844455 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844659 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844826 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"controller-manager-67db6f585-dp67c\" (UID: 
\"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.845146 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.845358 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946128 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946157 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946196 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946282 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 
19:39:01.946306 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946335 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.947432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.947768 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.948453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " 
pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.949482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.949845 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.950871 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.951656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.970693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod 
\"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.972928 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.059825 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.073285 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.572022 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:02 crc kubenswrapper[4932]: W0218 19:39:02.586958 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93e7ae54_b8ce_4890_9901_514d2f4b7f0a.slice/crio-69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a WatchSource:0}: Error finding container 69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a: Status 404 returned error can't find the container with id 69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.618116 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:02 crc 
kubenswrapper[4932]: W0218 19:39:02.624642 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16265115_064b_4308_8c41_b58e058ed40d.slice/crio-b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610 WatchSource:0}: Error finding container b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610: Status 404 returned error can't find the container with id b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610 Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.902527 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.105426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerStarted","Data":"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b"} Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.105745 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.105756 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerStarted","Data":"69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a"} Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.106645 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerStarted","Data":"3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf"} Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 
19:39:03.106678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerStarted","Data":"b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610"} Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.107126 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.112289 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.112596 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.124870 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" podStartSLOduration=4.124851028 podStartE2EDuration="4.124851028s" podCreationTimestamp="2026-02-18 19:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:03.121044105 +0000 UTC m=+306.702998940" watchObservedRunningTime="2026-02-18 19:39:03.124851028 +0000 UTC m=+306.706805883" Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.137209 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" podStartSLOduration=4.137189057 podStartE2EDuration="4.137189057s" podCreationTimestamp="2026-02-18 19:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:39:03.133900527 +0000 UTC m=+306.715855372" watchObservedRunningTime="2026-02-18 19:39:03.137189057 +0000 UTC m=+306.719143892" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.383847 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5kmmh"] Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.384952 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.400031 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5kmmh"] Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.498998 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1a50912-ee96-4a51-8ad1-49a83e229618-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-trusted-ca\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1a50912-ee96-4a51-8ad1-49a83e229618-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-certificates\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-tls\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-kube-api-access-mqdkx\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499679 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-bound-sa-token\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.517875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600507 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-tls\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-kube-api-access-mqdkx\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-bound-sa-token\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 
19:39:05.600584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1a50912-ee96-4a51-8ad1-49a83e229618-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600624 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-trusted-ca\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1a50912-ee96-4a51-8ad1-49a83e229618-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-certificates\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.601740 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-certificates\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.603547 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1a50912-ee96-4a51-8ad1-49a83e229618-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.604775 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-trusted-ca\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.615705 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-tls\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.619581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1a50912-ee96-4a51-8ad1-49a83e229618-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.620518 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-kube-api-access-mqdkx\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: 
\"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.624727 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-bound-sa-token\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.707075 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:06 crc kubenswrapper[4932]: I0218 19:39:06.096480 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5kmmh"] Feb 18 19:39:06 crc kubenswrapper[4932]: W0218 19:39:06.099916 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a50912_ee96_4a51_8ad1_49a83e229618.slice/crio-8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2 WatchSource:0}: Error finding container 8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2: Status 404 returned error can't find the container with id 8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2 Feb 18 19:39:06 crc kubenswrapper[4932]: I0218 19:39:06.126751 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" event={"ID":"a1a50912-ee96-4a51-8ad1-49a83e229618","Type":"ContainerStarted","Data":"8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2"} Feb 18 19:39:07 crc kubenswrapper[4932]: I0218 19:39:07.134291 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" 
event={"ID":"a1a50912-ee96-4a51-8ad1-49a83e229618","Type":"ContainerStarted","Data":"5f6ea72debe2de4aaf5d2b14a806ea124708968c517e218ddb73f15a1487b163"} Feb 18 19:39:07 crc kubenswrapper[4932]: I0218 19:39:07.134678 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" Feb 18 19:39:07 crc kubenswrapper[4932]: I0218 19:39:07.153165 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" podStartSLOduration=2.153149621 podStartE2EDuration="2.153149621s" podCreationTimestamp="2026-02-18 19:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:07.150962618 +0000 UTC m=+310.732917513" watchObservedRunningTime="2026-02-18 19:39:07.153149621 +0000 UTC m=+310.735104466" Feb 18 19:39:19 crc kubenswrapper[4932]: I0218 19:39:19.743108 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:19 crc kubenswrapper[4932]: I0218 19:39:19.744115 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager" containerID="cri-o://3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf" gracePeriod=30 Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.213727 4932 generic.go:334] "Generic (PLEG): container finished" podID="16265115-064b-4308-8c41-b58e058ed40d" containerID="3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf" exitCode=0 Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.213864 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerDied","Data":"3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf"} Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.428516 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511651 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511795 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511836 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511881 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.512805 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca" (OuterVolumeSpecName: "client-ca") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.513969 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config" (OuterVolumeSpecName: "config") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.518569 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn" (OuterVolumeSpecName: "kube-api-access-plxfn") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "kube-api-access-plxfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.518979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.613929 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.614202 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.614213 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.614222 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.219788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerDied","Data":"b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610"} Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.219853 4932 scope.go:117] "RemoveContainer" containerID="3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.219864 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.242732 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.248658 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.736088 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"] Feb 18 19:39:21 crc kubenswrapper[4932]: E0218 19:39:21.736570 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.736612 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.736868 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.737718 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.739944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.742961 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.743392 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.744419 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.744793 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.745520 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.747877 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"] Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.832910 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7wb\" (UniqueName: \"kubernetes.io/projected/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-kube-api-access-fd7wb\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.833037 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-serving-cert\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.833084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-config\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.833127 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-client-ca\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-client-ca\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934839 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7wb\" (UniqueName: \"kubernetes.io/projected/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-kube-api-access-fd7wb\") pod 
\"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934922 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-serving-cert\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934954 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-config\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.936520 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-client-ca\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.936575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-config\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.949127 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-serving-cert\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.971814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7wb\" (UniqueName: \"kubernetes.io/projected/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-kube-api-access-fd7wb\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:22 crc kubenswrapper[4932]: I0218 19:39:22.060465 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:22 crc kubenswrapper[4932]: I0218 19:39:22.571929 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"]
Feb 18 19:39:22 crc kubenswrapper[4932]: W0218 19:39:22.572312 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2d4d85_8ed9_4c7b_bc43_d8120f8c85ed.slice/crio-fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1 WatchSource:0}: Error finding container fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1: Status 404 returned error can't find the container with id fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.190371 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16265115-064b-4308-8c41-b58e058ed40d" path="/var/lib/kubelet/pods/16265115-064b-4308-8c41-b58e058ed40d/volumes"
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.255452 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" event={"ID":"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed","Type":"ContainerStarted","Data":"10c4a79a1ec9f093dd19c9ad8769bd988f1ea90dbe807ac6b81d666fb30e9743"}
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.255540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" event={"ID":"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed","Type":"ContainerStarted","Data":"fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1"}
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.255736 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.285000 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" podStartSLOduration=4.284969997 podStartE2EDuration="4.284969997s" podCreationTimestamp="2026-02-18 19:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:23.283888261 +0000 UTC m=+326.865843106" watchObservedRunningTime="2026-02-18 19:39:23.284969997 +0000 UTC m=+326.866924882"
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.328807 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:25 crc kubenswrapper[4932]: I0218 19:39:25.715052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:25 crc kubenswrapper[4932]: I0218 19:39:25.787968 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"]
Feb 18 19:39:50 crc kubenswrapper[4932]: I0218 19:39:50.836124 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry" containerID="cri-o://977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" gracePeriod=30
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.285580 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.378959 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379063 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379118 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379151 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379200 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379380 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.380007 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.380057 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.385933 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.385947 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv" (OuterVolumeSpecName: "kube-api-access-kxwvv") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "kube-api-access-kxwvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.386362 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.386681 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.395162 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.395753 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451341 4932 generic.go:334] "Generic (PLEG): container finished" podID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" exitCode=0
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451400 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451395 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerDied","Data":"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"}
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerDied","Data":"e2cd6e9fe7b91c0ea246bc59cf9d11b75cc0eb7a103b52573fd6adf6936ac914"}
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451473 4932 scope.go:117] "RemoveContainer" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.471771 4932 scope.go:117] "RemoveContainer" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"
Feb 18 19:39:51 crc kubenswrapper[4932]: E0218 19:39:51.472378 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c\": container with ID starting with 977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c not found: ID does not exist" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.472448 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"} err="failed to get container status \"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c\": rpc error: code = NotFound desc = could not find container \"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c\": container with ID starting with 977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c not found: ID does not exist"
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481146 4932 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481199 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481211 4932 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481222 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481234 4932 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481244 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481254 4932 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.491311 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"]
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.497430 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"]
Feb 18 19:39:53 crc kubenswrapper[4932]: I0218 19:39:53.189071 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" path="/var/lib/kubelet/pods/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22/volumes"
Feb 18 19:39:57 crc kubenswrapper[4932]: I0218 19:39:57.606421 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:39:57 crc kubenswrapper[4932]: I0218 19:39:57.607052 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.037157 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"]
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.037947 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qvwc8" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" containerID="cri-o://6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" gracePeriod=30
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.050920 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"]
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.051279 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2xgw" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" containerID="cri-o://13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b" gracePeriod=30
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.063199 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"]
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.063516 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" containerID="cri-o://6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" gracePeriod=30
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.072566 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"]
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.072827 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4w2tj" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" containerID="cri-o://8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e" gracePeriod=30
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.085536 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-44vtg"]
Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.085933 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.085970 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.086145 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.086880 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.091824 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"]
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.092135 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chh8j" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" containerID="cri-o://e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2" gracePeriod=30
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.105582 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-44vtg"]
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.190255 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzpc\" (UniqueName: \"kubernetes.io/projected/58ed1571-b94a-4792-9c8f-ead2f0596e42-kube-api-access-lnzpc\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.190301 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.190332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.291370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnzpc\" (UniqueName: \"kubernetes.io/projected/58ed1571-b94a-4792-9c8f-ead2f0596e42-kube-api-access-lnzpc\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.291426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.291461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.292796 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.301357 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.310639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnzpc\" (UniqueName: \"kubernetes.io/projected/58ed1571-b94a-4792-9c8f-ead2f0596e42-kube-api-access-lnzpc\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.465146 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.483374 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.502861 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.511895 4932 generic.go:334] "Generic (PLEG): container finished" podID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerID="8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e" exitCode=0
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.511999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513829 4932 generic.go:334] "Generic (PLEG): container finished" podID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" exitCode=0
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513919 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerDied","Data":"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513953 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerDied","Data":"784badddcd9797871fec35aacb4b375a077788de958864c50c207fa8ea3d3eb2"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513979 4932 scope.go:117] "RemoveContainer" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.514374 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.524313 4932 generic.go:334] "Generic (PLEG): container finished" podID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" exitCode=0
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.524788 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.524792 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.525102 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"94c56c7588969970298ca76c9989e0d42da323b423ba2e42eec0825109130ea6"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.537663 4932 scope.go:117] "RemoveContainer" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.540596 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerID="e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2" exitCode=0
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.540668 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.544092 4932 generic.go:334] "Generic (PLEG): container finished" podID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerID="13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b" exitCode=0
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.544115 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b"}
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.576031 4932 scope.go:117] "RemoveContainer" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"
Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.577928 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe\": container with ID starting with 6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe not found: ID does not exist" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.577964 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"} err="failed to get container status \"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe\": rpc error: code = NotFound desc = could not find container \"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe\": container with ID starting with 6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe not found: ID does not exist"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.577989 4932 scope.go:117] "RemoveContainer" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"
Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.578258 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e\": container with ID starting with e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e not found: ID does not exist" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.578281 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"} err="failed to get container status \"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e\": rpc error: code = NotFound desc = could not find container \"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e\": container with ID starting with e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e not found: ID does not exist"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.578294 4932 scope.go:117] "RemoveContainer" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.594880 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"cafe1e82-ef19-4345-825e-cc9bf016b353\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") "
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.594932 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") "
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.594951 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") "
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.595007 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"cafe1e82-ef19-4345-825e-cc9bf016b353\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") "
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.595037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") pod \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") "
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.595072 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"cafe1e82-ef19-4345-825e-cc9bf016b353\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") "
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.597550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e39708f9-5d2d-4ed5-9243-7b71ef470ca7" (UID: "e39708f9-5d2d-4ed5-9243-7b71ef470ca7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.599889 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m" (OuterVolumeSpecName: "kube-api-access-rbm4m") pod "e39708f9-5d2d-4ed5-9243-7b71ef470ca7" (UID: "e39708f9-5d2d-4ed5-9243-7b71ef470ca7"). InnerVolumeSpecName "kube-api-access-rbm4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.600057 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e39708f9-5d2d-4ed5-9243-7b71ef470ca7" (UID: "e39708f9-5d2d-4ed5-9243-7b71ef470ca7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.601619 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c" (OuterVolumeSpecName: "kube-api-access-sr45c") pod "cafe1e82-ef19-4345-825e-cc9bf016b353" (UID: "cafe1e82-ef19-4345-825e-cc9bf016b353"). InnerVolumeSpecName "kube-api-access-sr45c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.603728 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities" (OuterVolumeSpecName: "utilities") pod "cafe1e82-ef19-4345-825e-cc9bf016b353" (UID: "cafe1e82-ef19-4345-825e-cc9bf016b353"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.610340 4932 scope.go:117] "RemoveContainer" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.660125 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.663283 4932 scope.go:117] "RemoveContainer" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.673429 4932 scope.go:117] "RemoveContainer" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"
Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.673752 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26\": container with ID starting with 6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26 not found: ID does not exist" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.673779 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"} err="failed to get container status \"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26\": rpc error: code = NotFound desc = could not find container \"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26\": container with ID starting with 6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26 not found: ID does not exist"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.677363 4932 scope.go:117] "RemoveContainer" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"
Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.678711 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13\": container with ID starting with 615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13 not found: ID does not exist" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.678741 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"} err="failed to get container status \"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13\": rpc error: code = NotFound desc = could not find container \"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13\": container with ID starting with 615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13 not found: ID does not exist"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.678760 4932 scope.go:117] "RemoveContainer" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"
Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.679878 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f\": container with ID starting with b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f not found: ID does not exist" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"
Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.679900 4932 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"} err="failed to get container status \"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f\": rpc error: code = NotFound desc = could not find container \"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f\": container with ID starting with b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f not found: ID does not exist" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.681587 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cafe1e82-ef19-4345-825e-cc9bf016b353" (UID: "cafe1e82-ef19-4345-825e-cc9bf016b353"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699356 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699385 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699395 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699408 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699417 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699427 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.710591 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.719434 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.719872 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" containerID="cri-o://250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.751925 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800064 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"ce921030-ec82-420d-a9e7-cd04ee7e055b\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800135 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"ce921030-ec82-420d-a9e7-cd04ee7e055b\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800195 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"ce921030-ec82-420d-a9e7-cd04ee7e055b\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"62bbf001-ce57-471f-ad28-1d892d0d30e9\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800593 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"62bbf001-ce57-471f-ad28-1d892d0d30e9\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800661 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"62bbf001-ce57-471f-ad28-1d892d0d30e9\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800781 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities" (OuterVolumeSpecName: "utilities") pod "ce921030-ec82-420d-a9e7-cd04ee7e055b" (UID: "ce921030-ec82-420d-a9e7-cd04ee7e055b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.801639 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.802229 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities" (OuterVolumeSpecName: "utilities") pod "62bbf001-ce57-471f-ad28-1d892d0d30e9" (UID: "62bbf001-ce57-471f-ad28-1d892d0d30e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.805138 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x" (OuterVolumeSpecName: "kube-api-access-fc64x") pod "ce921030-ec82-420d-a9e7-cd04ee7e055b" (UID: "ce921030-ec82-420d-a9e7-cd04ee7e055b"). InnerVolumeSpecName "kube-api-access-fc64x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.805453 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v" (OuterVolumeSpecName: "kube-api-access-rgm8v") pod "62bbf001-ce57-471f-ad28-1d892d0d30e9" (UID: "62bbf001-ce57-471f-ad28-1d892d0d30e9"). InnerVolumeSpecName "kube-api-access-rgm8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.840418 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.847975 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.860617 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.861295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62bbf001-ce57-471f-ad28-1d892d0d30e9" (UID: "62bbf001-ce57-471f-ad28-1d892d0d30e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.865488 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903160 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"b77a623a-ff2e-45aa-9004-b211b0200a3f\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903448 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"b77a623a-ff2e-45aa-9004-b211b0200a3f\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903529 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"b77a623a-ff2e-45aa-9004-b211b0200a3f\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903722 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903751 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903760 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903783 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.905441 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities" (OuterVolumeSpecName: "utilities") pod "b77a623a-ff2e-45aa-9004-b211b0200a3f" (UID: "b77a623a-ff2e-45aa-9004-b211b0200a3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.907700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp" (OuterVolumeSpecName: "kube-api-access-7lttp") pod "b77a623a-ff2e-45aa-9004-b211b0200a3f" (UID: "b77a623a-ff2e-45aa-9004-b211b0200a3f"). InnerVolumeSpecName "kube-api-access-7lttp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.937242 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b77a623a-ff2e-45aa-9004-b211b0200a3f" (UID: "b77a623a-ff2e-45aa-9004-b211b0200a3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.947883 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-44vtg"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.948076 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce921030-ec82-420d-a9e7-cd04ee7e055b" (UID: "ce921030-ec82-420d-a9e7-cd04ee7e055b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: W0218 19:39:59.965191 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ed1571_b94a_4792_9c8f_ead2f0596e42.slice/crio-05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978 WatchSource:0}: Error finding container 05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978: Status 404 returned error can't find the container with id 05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978 Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004874 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004900 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004909 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004917 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.054019 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206722 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206839 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206909 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206929 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 
19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206970 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.207673 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca" (OuterVolumeSpecName: "client-ca") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.208007 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config" (OuterVolumeSpecName: "config") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.208040 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.210813 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5" (OuterVolumeSpecName: "kube-api-access-c5pz5") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "kube-api-access-c5pz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.211928 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308478 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308530 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308548 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308562 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308576 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.552128 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"79bf00f2e14eaea6ac861e5d5414045b4e7af7c9494be58a0ddf97f7bbd0066e"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.552158 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.552222 4932 scope.go:117] "RemoveContainer" containerID="8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.557159 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"df158c2125177f92039a79a6401f4bb6f7b2c14373fe74c537b86d94e6f1ab0e"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.557233 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558320 4932 generic.go:334] "Generic (PLEG): container finished" podID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" exitCode=0 Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerDied","Data":"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558408 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerDied","Data":"69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558460 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.567282 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.567518 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.569415 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" event={"ID":"58ed1571-b94a-4792-9c8f-ead2f0596e42","Type":"ContainerStarted","Data":"705af7e82397d874c7abff7a640e68623d95a89fe326a1d8a328c9df6252c17d"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.569455 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" event={"ID":"58ed1571-b94a-4792-9c8f-ead2f0596e42","Type":"ContainerStarted","Data":"05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.570036 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.578162 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.578278 4932 scope.go:117] "RemoveContainer" containerID="399844cbfb1eed438dbae81663b568d5834893c25f35e7193be65debdd42cfaa" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.596911 4932 scope.go:117] "RemoveContainer" containerID="a6fd3575dcddfe36fd8dfcc8e6bcb0f7035ca23b01b700d078f298418c1896e8" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.607368 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" podStartSLOduration=1.606238883 podStartE2EDuration="1.606238883s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:00.603625729 +0000 UTC 
m=+364.185580594" watchObservedRunningTime="2026-02-18 19:40:00.606238883 +0000 UTC m=+364.188193728" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.628153 4932 scope.go:117] "RemoveContainer" containerID="e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.651237 4932 scope.go:117] "RemoveContainer" containerID="fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.651458 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.661351 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.668221 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.674413 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.675957 4932 scope.go:117] "RemoveContainer" containerID="bda935338a806285152d3571a5562901d0dc27851a41082e686230cc48a54915" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.682043 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.684538 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.691630 4932 scope.go:117] "RemoveContainer" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.693366 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.696972 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.707978 4932 scope.go:117] "RemoveContainer" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.708395 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b\": container with ID starting with 250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b not found: ID does not exist" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.708426 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b"} err="failed to get container status \"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b\": rpc error: code = NotFound desc = could not find container \"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b\": container with ID starting with 250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b not found: ID does not exist" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.708461 4932 scope.go:117] "RemoveContainer" containerID="13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.723397 4932 scope.go:117] "RemoveContainer" containerID="5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.744658 4932 scope.go:117] "RemoveContainer" 
containerID="6d3ff69895d4bcdcf15d410bfbcd335c0b79b07284d7d99d33d18f064ce3f033" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757691 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-s6z9t"] Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757875 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757891 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757901 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757907 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757917 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757923 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757932 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757938 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757947 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757952 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757962 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757967 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757978 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757985 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757993 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757999 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758007 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758013 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758022 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758027 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758037 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758042 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758050 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758056 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758062 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758067 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758074 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758080 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758089 4932 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758095 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758168 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758190 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758201 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758211 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758218 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758226 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758234 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758554 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.760058 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.762510 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.762825 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.764700 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.765146 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.765874 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.768014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.772095 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-s6z9t"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917471 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-config\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " 
pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917522 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d96beb-ea1c-44c9-8959-625e6dd22b23-serving-cert\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917574 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnpsl\" (UniqueName: \"kubernetes.io/projected/89d96beb-ea1c-44c9-8959-625e6dd22b23-kube-api-access-mnpsl\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917616 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-client-ca\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-config\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d96beb-ea1c-44c9-8959-625e6dd22b23-serving-cert\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnpsl\" (UniqueName: \"kubernetes.io/projected/89d96beb-ea1c-44c9-8959-625e6dd22b23-kube-api-access-mnpsl\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019300 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-client-ca\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.021207 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-client-ca\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.021745 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-config\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.022268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.025801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d96beb-ea1c-44c9-8959-625e6dd22b23-serving-cert\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.052899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnpsl\" (UniqueName: \"kubernetes.io/projected/89d96beb-ea1c-44c9-8959-625e6dd22b23-kube-api-access-mnpsl\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc 
kubenswrapper[4932]: I0218 19:40:01.071694 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.199814 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" path="/var/lib/kubelet/pods/62bbf001-ce57-471f-ad28-1d892d0d30e9/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.203045 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" path="/var/lib/kubelet/pods/93e7ae54-b8ce-4890-9901-514d2f4b7f0a/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.203844 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" path="/var/lib/kubelet/pods/b77a623a-ff2e-45aa-9004-b211b0200a3f/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.206095 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" path="/var/lib/kubelet/pods/cafe1e82-ef19-4345-825e-cc9bf016b353/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.207641 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" path="/var/lib/kubelet/pods/ce921030-ec82-420d-a9e7-cd04ee7e055b/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.209598 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" path="/var/lib/kubelet/pods/e39708f9-5d2d-4ed5-9243-7b71ef470ca7/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.327969 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-s6z9t"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.457460 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fbhgz"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.458551 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.462595 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.467711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbhgz"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.526571 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-catalog-content\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.526631 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjtj\" (UniqueName: \"kubernetes.io/projected/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-kube-api-access-nwjtj\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.526665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-utilities\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.576356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" event={"ID":"89d96beb-ea1c-44c9-8959-625e6dd22b23","Type":"ContainerStarted","Data":"52b1f47f7c74eda759908c839be252c964d6d2ae23011adc3820aad79511bb3b"} Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.576405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" event={"ID":"89d96beb-ea1c-44c9-8959-625e6dd22b23","Type":"ContainerStarted","Data":"8f16b047ffa741d4d75ade3d9bd1041c252050c251d6eb5d34728bf951dc4f26"} Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.576602 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.579643 4932 patch_prober.go:28] interesting pod/controller-manager-8b5db5768-s6z9t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.579773 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" podUID="89d96beb-ea1c-44c9-8959-625e6dd22b23" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.592757 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" podStartSLOduration=2.59274073 podStartE2EDuration="2.59274073s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:40:01.589318196 +0000 UTC m=+365.171273101" watchObservedRunningTime="2026-02-18 19:40:01.59274073 +0000 UTC m=+365.174695575" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.628299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-catalog-content\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.628459 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjtj\" (UniqueName: \"kubernetes.io/projected/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-kube-api-access-nwjtj\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.628546 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-utilities\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.629156 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-catalog-content\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.629307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-utilities\") pod \"redhat-marketplace-fbhgz\" (UID: 
\"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.664116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjtj\" (UniqueName: \"kubernetes.io/projected/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-kube-api-access-nwjtj\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.692416 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mshwj"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.693806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.696889 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.708799 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mshwj"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.783845 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.832129 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-utilities\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.832438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-catalog-content\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.832574 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bc29\" (UniqueName: \"kubernetes.io/projected/d67ed032-a807-4d71-9580-3dee5922bc22-kube-api-access-6bc29\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.933859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bc29\" (UniqueName: \"kubernetes.io/projected/d67ed032-a807-4d71-9580-3dee5922bc22-kube-api-access-6bc29\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.933943 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-utilities\") pod 
\"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.933978 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-catalog-content\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.934409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-catalog-content\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.934565 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-utilities\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.951188 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bc29\" (UniqueName: \"kubernetes.io/projected/d67ed032-a807-4d71-9580-3dee5922bc22-kube-api-access-6bc29\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.981394 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbhgz"] Feb 18 19:40:01 crc kubenswrapper[4932]: W0218 19:40:01.988398 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82d8d8a1_602e_4738_8f7c_68d5d99c8a08.slice/crio-2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631 WatchSource:0}: Error finding container 2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631: Status 404 returned error can't find the container with id 2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631 Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.006286 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.410948 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mshwj"] Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.602456 4932 generic.go:334] "Generic (PLEG): container finished" podID="82d8d8a1-602e-4738-8f7c-68d5d99c8a08" containerID="d99d3b39f23c2b47699c019d0e906e7002f992cdeffa7929314656dae06f42c4" exitCode=0 Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.602542 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerDied","Data":"d99d3b39f23c2b47699c019d0e906e7002f992cdeffa7929314656dae06f42c4"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.602835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerStarted","Data":"2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.604295 4932 generic.go:334] "Generic (PLEG): container finished" podID="d67ed032-a807-4d71-9580-3dee5922bc22" containerID="9e9d8a40b56a12c6453359140a2dee14ee9f02a8b7b7fce251d94bda397a7d95" exitCode=0 Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.604973 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerDied","Data":"9e9d8a40b56a12c6453359140a2dee14ee9f02a8b7b7fce251d94bda397a7d95"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.605004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerStarted","Data":"9f4b7662a0b48cc385b312fbcc39ba68b72a4b2d48430de063a54454cf66fb83"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.611947 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.849286 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6mzhg"] Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.850583 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.852385 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.859855 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6mzhg"] Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.961087 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/f3054517-1735-4758-9f31-1bea7ef3a90f-kube-api-access-d5z94\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.961127 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-utilities\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.961153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-catalog-content\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.050009 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2xmq4"] Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.051084 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.055340 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.061953 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xmq4"] Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/f3054517-1735-4758-9f31-1bea7ef3a90f-kube-api-access-d5z94\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-utilities\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-catalog-content\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062842 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-catalog-content\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" 
Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062943 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-utilities\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.093354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/f3054517-1735-4758-9f31-1bea7ef3a90f-kube-api-access-d5z94\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.163673 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrjl\" (UniqueName: \"kubernetes.io/projected/456839f3-9db1-45f2-bef4-c2b272a0f390-kube-api-access-wrrjl\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.163823 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-utilities\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.163870 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-catalog-content\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " 
pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.170400 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.268087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-catalog-content\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.268628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrjl\" (UniqueName: \"kubernetes.io/projected/456839f3-9db1-45f2-bef4-c2b272a0f390-kube-api-access-wrrjl\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.268740 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-utilities\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.270793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-catalog-content\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.271318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-utilities\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.292637 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrjl\" (UniqueName: \"kubernetes.io/projected/456839f3-9db1-45f2-bef4-c2b272a0f390-kube-api-access-wrrjl\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.365980 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.575722 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6mzhg"] Feb 18 19:40:04 crc kubenswrapper[4932]: W0218 19:40:04.583916 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3054517_1735_4758_9f31_1bea7ef3a90f.slice/crio-99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a WatchSource:0}: Error finding container 99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a: Status 404 returned error can't find the container with id 99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.619591 4932 generic.go:334] "Generic (PLEG): container finished" podID="82d8d8a1-602e-4738-8f7c-68d5d99c8a08" containerID="5fd5e5ff9555cd90d087a8173924811e40b8cc3af67f40693aca6cceca6c0c2f" exitCode=0 Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.620485 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" 
event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerDied","Data":"5fd5e5ff9555cd90d087a8173924811e40b8cc3af67f40693aca6cceca6c0c2f"} Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.624493 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerStarted","Data":"99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a"} Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.630267 4932 generic.go:334] "Generic (PLEG): container finished" podID="d67ed032-a807-4d71-9580-3dee5922bc22" containerID="b13d4bf7f51c9adf718207ab5a7e0347aa4ebf14771a2bd87f6ecf36dd9bd765" exitCode=0 Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.630309 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerDied","Data":"b13d4bf7f51c9adf718207ab5a7e0347aa4ebf14771a2bd87f6ecf36dd9bd765"} Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.773494 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xmq4"] Feb 18 19:40:04 crc kubenswrapper[4932]: W0218 19:40:04.812608 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456839f3_9db1_45f2_bef4_c2b272a0f390.slice/crio-00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961 WatchSource:0}: Error finding container 00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961: Status 404 returned error can't find the container with id 00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961 Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.635734 4932 generic.go:334] "Generic (PLEG): container finished" podID="456839f3-9db1-45f2-bef4-c2b272a0f390" 
containerID="37ada88bd394eeea79fbc62cb96deb7d09f38d21554e575b9c618962e240315a" exitCode=0 Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.635807 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerDied","Data":"37ada88bd394eeea79fbc62cb96deb7d09f38d21554e575b9c618962e240315a"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.636048 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerStarted","Data":"00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.639154 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerStarted","Data":"3705d494829488e5cc341bb9c8716dc343e9837ebf78a1f2536bfda68d93fdf6"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.641713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerStarted","Data":"d243e23d240c6397ccb0eaab8e195190faa489f15ef43ff8e4f256611a078327"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.642957 4932 generic.go:334] "Generic (PLEG): container finished" podID="f3054517-1735-4758-9f31-1bea7ef3a90f" containerID="7e8ec96a09d23aa647c947e47365d37a4b7d04937b42f92aa6a00e1d7757fdf2" exitCode=0 Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.642983 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerDied","Data":"7e8ec96a09d23aa647c947e47365d37a4b7d04937b42f92aa6a00e1d7757fdf2"} Feb 18 19:40:05 crc kubenswrapper[4932]: 
I0218 19:40:05.702500 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mshwj" podStartSLOduration=2.208082834 podStartE2EDuration="4.702482299s" podCreationTimestamp="2026-02-18 19:40:01 +0000 UTC" firstStartedPulling="2026-02-18 19:40:02.605869466 +0000 UTC m=+366.187824331" lastFinishedPulling="2026-02-18 19:40:05.100268911 +0000 UTC m=+368.682223796" observedRunningTime="2026-02-18 19:40:05.679608472 +0000 UTC m=+369.261563317" watchObservedRunningTime="2026-02-18 19:40:05.702482299 +0000 UTC m=+369.284437144" Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.725971 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbhgz" podStartSLOduration=2.342453497 podStartE2EDuration="4.725953751s" podCreationTimestamp="2026-02-18 19:40:01 +0000 UTC" firstStartedPulling="2026-02-18 19:40:02.605414165 +0000 UTC m=+366.187369040" lastFinishedPulling="2026-02-18 19:40:04.988914449 +0000 UTC m=+368.570869294" observedRunningTime="2026-02-18 19:40:05.705720438 +0000 UTC m=+369.287675283" watchObservedRunningTime="2026-02-18 19:40:05.725953751 +0000 UTC m=+369.307908596" Feb 18 19:40:06 crc kubenswrapper[4932]: I0218 19:40:06.650999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerStarted","Data":"467703b6550e8102f08da900fae7a0802af8d985d81cea88089cddb03366e6e3"} Feb 18 19:40:06 crc kubenswrapper[4932]: I0218 19:40:06.653918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerStarted","Data":"a66d421c1b72eb5fbbc3d3c93de3b528bb6aad0daaa1b161e3caa9ec9473bb5b"} Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.665255 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="f3054517-1735-4758-9f31-1bea7ef3a90f" containerID="467703b6550e8102f08da900fae7a0802af8d985d81cea88089cddb03366e6e3" exitCode=0 Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.665401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerDied","Data":"467703b6550e8102f08da900fae7a0802af8d985d81cea88089cddb03366e6e3"} Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.671600 4932 generic.go:334] "Generic (PLEG): container finished" podID="456839f3-9db1-45f2-bef4-c2b272a0f390" containerID="a66d421c1b72eb5fbbc3d3c93de3b528bb6aad0daaa1b161e3caa9ec9473bb5b" exitCode=0 Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.671635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerDied","Data":"a66d421c1b72eb5fbbc3d3c93de3b528bb6aad0daaa1b161e3caa9ec9473bb5b"} Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.681273 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerStarted","Data":"33334224c8879576f443f718e955b7a6f9f37dec1fa73436b9ccf6d9fa42099c"} Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.685905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerStarted","Data":"62954f9d0b85f235d9b60cd7e44f4be526d4c5e71b3667788c5ffe7f906673ad"} Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.724534 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2xmq4" podStartSLOduration=2.290111821 podStartE2EDuration="4.724500035s" podCreationTimestamp="2026-02-18 19:40:04 +0000 UTC" 
firstStartedPulling="2026-02-18 19:40:05.637317562 +0000 UTC m=+369.219272427" lastFinishedPulling="2026-02-18 19:40:08.071705796 +0000 UTC m=+371.653660641" observedRunningTime="2026-02-18 19:40:08.704802255 +0000 UTC m=+372.286757130" watchObservedRunningTime="2026-02-18 19:40:08.724500035 +0000 UTC m=+372.306454900" Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.727450 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6mzhg" podStartSLOduration=3.224022042 podStartE2EDuration="5.727432827s" podCreationTimestamp="2026-02-18 19:40:03 +0000 UTC" firstStartedPulling="2026-02-18 19:40:05.644231111 +0000 UTC m=+369.226185946" lastFinishedPulling="2026-02-18 19:40:08.147641886 +0000 UTC m=+371.729596731" observedRunningTime="2026-02-18 19:40:08.722745652 +0000 UTC m=+372.304700507" watchObservedRunningTime="2026-02-18 19:40:08.727432827 +0000 UTC m=+372.309387692" Feb 18 19:40:11 crc kubenswrapper[4932]: I0218 19:40:11.784987 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:11 crc kubenswrapper[4932]: I0218 19:40:11.785375 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:11 crc kubenswrapper[4932]: I0218 19:40:11.835807 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.007460 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.008633 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.065078 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.755960 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.760309 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.171247 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.171529 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.366404 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.366467 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.434046 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.765345 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:15 crc kubenswrapper[4932]: I0218 19:40:15.257094 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6mzhg" podUID="f3054517-1735-4758-9f31-1bea7ef3a90f" containerName="registry-server" probeResult="failure" output=< Feb 18 19:40:15 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:40:15 crc 
kubenswrapper[4932]: > Feb 18 19:40:24 crc kubenswrapper[4932]: I0218 19:40:24.209012 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:24 crc kubenswrapper[4932]: I0218 19:40:24.279551 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:27 crc kubenswrapper[4932]: I0218 19:40:27.606236 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:40:27 crc kubenswrapper[4932]: I0218 19:40:27.606532 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.607089 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.608330 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.608414 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.610321 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.610482 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894" gracePeriod=600 Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.993105 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894" exitCode=0 Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.993228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894"} Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.994061 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d"} Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.994133 4932 scope.go:117] "RemoveContainer" 
containerID="913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e" Feb 18 19:42:57 crc kubenswrapper[4932]: I0218 19:42:57.606231 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:42:57 crc kubenswrapper[4932]: I0218 19:42:57.606930 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:27 crc kubenswrapper[4932]: I0218 19:43:27.606808 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:43:27 crc kubenswrapper[4932]: I0218 19:43:27.607511 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.605739 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.606402 4932 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.606458 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.607227 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.607298 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d" gracePeriod=600 Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 19:43:58.175055 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d" exitCode=0 Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 19:43:58.175201 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d"} Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 
19:43:58.175460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be"} Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 19:43:58.175497 4932 scope.go:117] "RemoveContainer" containerID="1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.339442 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2rnll"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.340558 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.345581 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.350595 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6jw5c" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.350648 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.355405 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2rnll"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.367410 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-9cct4"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.368166 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.368642 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9cct4"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.373920 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-65g9t" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.377699 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cfzm7"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.378553 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.379934 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jrk65" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.399701 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cfzm7"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.530393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8fr\" (UniqueName: \"kubernetes.io/projected/fdfef839-bac4-4bdb-bdec-7e5daff1d25a-kube-api-access-8b8fr\") pod \"cert-manager-webhook-687f57d79b-cfzm7\" (UID: \"fdfef839-bac4-4bdb-bdec-7e5daff1d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.530487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsw6\" (UniqueName: \"kubernetes.io/projected/e536e457-1629-4f37-a5dc-de0facb7639f-kube-api-access-fwsw6\") pod \"cert-manager-cainjector-cf98fcc89-2rnll\" (UID: \"e536e457-1629-4f37-a5dc-de0facb7639f\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.530529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftl8l\" (UniqueName: \"kubernetes.io/projected/f4644100-28b1-4203-bec6-a1c1605468eb-kube-api-access-ftl8l\") pod \"cert-manager-858654f9db-9cct4\" (UID: \"f4644100-28b1-4203-bec6-a1c1605468eb\") " pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.631152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftl8l\" (UniqueName: \"kubernetes.io/projected/f4644100-28b1-4203-bec6-a1c1605468eb-kube-api-access-ftl8l\") pod \"cert-manager-858654f9db-9cct4\" (UID: \"f4644100-28b1-4203-bec6-a1c1605468eb\") " pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.631253 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8fr\" (UniqueName: \"kubernetes.io/projected/fdfef839-bac4-4bdb-bdec-7e5daff1d25a-kube-api-access-8b8fr\") pod \"cert-manager-webhook-687f57d79b-cfzm7\" (UID: \"fdfef839-bac4-4bdb-bdec-7e5daff1d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.631324 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsw6\" (UniqueName: \"kubernetes.io/projected/e536e457-1629-4f37-a5dc-de0facb7639f-kube-api-access-fwsw6\") pod \"cert-manager-cainjector-cf98fcc89-2rnll\" (UID: \"e536e457-1629-4f37-a5dc-de0facb7639f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.650493 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8fr\" (UniqueName: 
\"kubernetes.io/projected/fdfef839-bac4-4bdb-bdec-7e5daff1d25a-kube-api-access-8b8fr\") pod \"cert-manager-webhook-687f57d79b-cfzm7\" (UID: \"fdfef839-bac4-4bdb-bdec-7e5daff1d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.651924 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftl8l\" (UniqueName: \"kubernetes.io/projected/f4644100-28b1-4203-bec6-a1c1605468eb-kube-api-access-ftl8l\") pod \"cert-manager-858654f9db-9cct4\" (UID: \"f4644100-28b1-4203-bec6-a1c1605468eb\") " pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.654831 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsw6\" (UniqueName: \"kubernetes.io/projected/e536e457-1629-4f37-a5dc-de0facb7639f-kube-api-access-fwsw6\") pod \"cert-manager-cainjector-cf98fcc89-2rnll\" (UID: \"e536e457-1629-4f37-a5dc-de0facb7639f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.662680 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.688199 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.697851 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.930280 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9cct4"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.936328 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.976026 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cfzm7"] Feb 18 19:44:56 crc kubenswrapper[4932]: W0218 19:44:56.980360 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfef839_bac4_4bdb_bdec_7e5daff1d25a.slice/crio-584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921 WatchSource:0}: Error finding container 584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921: Status 404 returned error can't find the container with id 584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921 Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.095454 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2rnll"] Feb 18 19:44:57 crc kubenswrapper[4932]: W0218 19:44:57.102160 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode536e457_1629_4f37_a5dc_de0facb7639f.slice/crio-6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765 WatchSource:0}: Error finding container 6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765: Status 404 returned error can't find the container with id 6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765 Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.537417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" event={"ID":"fdfef839-bac4-4bdb-bdec-7e5daff1d25a","Type":"ContainerStarted","Data":"584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921"} Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.538438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9cct4" event={"ID":"f4644100-28b1-4203-bec6-a1c1605468eb","Type":"ContainerStarted","Data":"5ececa4df3d4663f238180c59c0fe70826463a7fdecfc4b797d81c2fcc339ca5"} Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.539257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" event={"ID":"e536e457-1629-4f37-a5dc-de0facb7639f","Type":"ContainerStarted","Data":"6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765"} Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.163959 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.165034 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.167521 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.167721 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.176626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.182264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.182410 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.182469 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.284599 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.284665 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.284688 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.299715 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.306483 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.320581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.492538 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.193912 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 19:45:01 crc kubenswrapper[4932]: W0218 19:45:01.200949 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84719922_9618_4293_8f4a_fb525f37eca6.slice/crio-a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a WatchSource:0}: Error finding container a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a: Status 404 returned error can't find the container with id a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.567269 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" event={"ID":"e536e457-1629-4f37-a5dc-de0facb7639f","Type":"ContainerStarted","Data":"a542a8caa59a6958ad3c9f7e345775e7e2bfdaf60a51c058177c52b819823e86"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 
19:45:01.568389 4932 generic.go:334] "Generic (PLEG): container finished" podID="84719922-9618-4293-8f4a-fb525f37eca6" containerID="80752bb80b5cb6dad23a49c747590ff84b2c23ef678e45c05c4cf091b2c9b0a9" exitCode=0 Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.568477 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" event={"ID":"84719922-9618-4293-8f4a-fb525f37eca6","Type":"ContainerDied","Data":"80752bb80b5cb6dad23a49c747590ff84b2c23ef678e45c05c4cf091b2c9b0a9"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.568554 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" event={"ID":"84719922-9618-4293-8f4a-fb525f37eca6","Type":"ContainerStarted","Data":"a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.569392 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" event={"ID":"fdfef839-bac4-4bdb-bdec-7e5daff1d25a","Type":"ContainerStarted","Data":"0ce85b450cd0c5f0d92d9f4043eb828366f60b3434efea16a52d65b5b5a104df"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.569501 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.570421 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9cct4" event={"ID":"f4644100-28b1-4203-bec6-a1c1605468eb","Type":"ContainerStarted","Data":"011a20a1a8376f6ae34b0595328b637ea52712ecf246045b35b3370efdd352bb"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.592371 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" podStartSLOduration=1.894693077 podStartE2EDuration="5.592344006s" 
podCreationTimestamp="2026-02-18 19:44:56 +0000 UTC" firstStartedPulling="2026-02-18 19:44:57.104093848 +0000 UTC m=+660.686048703" lastFinishedPulling="2026-02-18 19:45:00.801744787 +0000 UTC m=+664.383699632" observedRunningTime="2026-02-18 19:45:01.580250837 +0000 UTC m=+665.162205692" watchObservedRunningTime="2026-02-18 19:45:01.592344006 +0000 UTC m=+665.174298891" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.603337 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-9cct4" podStartSLOduration=1.69179353 podStartE2EDuration="5.603306016s" podCreationTimestamp="2026-02-18 19:44:56 +0000 UTC" firstStartedPulling="2026-02-18 19:44:56.936063472 +0000 UTC m=+660.518018317" lastFinishedPulling="2026-02-18 19:45:00.847575958 +0000 UTC m=+664.429530803" observedRunningTime="2026-02-18 19:45:01.597280797 +0000 UTC m=+665.179235712" watchObservedRunningTime="2026-02-18 19:45:01.603306016 +0000 UTC m=+665.185260911" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.623863 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" podStartSLOduration=1.799602521 podStartE2EDuration="5.623833963s" podCreationTimestamp="2026-02-18 19:44:56 +0000 UTC" firstStartedPulling="2026-02-18 19:44:56.982057617 +0000 UTC m=+660.564012452" lastFinishedPulling="2026-02-18 19:45:00.806289059 +0000 UTC m=+664.388243894" observedRunningTime="2026-02-18 19:45:01.618190113 +0000 UTC m=+665.200144958" watchObservedRunningTime="2026-02-18 19:45:01.623833963 +0000 UTC m=+665.205788808" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.870745 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.916867 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"84719922-9618-4293-8f4a-fb525f37eca6\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.916989 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"84719922-9618-4293-8f4a-fb525f37eca6\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.917048 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"84719922-9618-4293-8f4a-fb525f37eca6\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.918060 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume" (OuterVolumeSpecName: "config-volume") pod "84719922-9618-4293-8f4a-fb525f37eca6" (UID: "84719922-9618-4293-8f4a-fb525f37eca6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.922908 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84719922-9618-4293-8f4a-fb525f37eca6" (UID: "84719922-9618-4293-8f4a-fb525f37eca6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.922968 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf" (OuterVolumeSpecName: "kube-api-access-cfnzf") pod "84719922-9618-4293-8f4a-fb525f37eca6" (UID: "84719922-9618-4293-8f4a-fb525f37eca6"). InnerVolumeSpecName "kube-api-access-cfnzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.019556 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.019599 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.019612 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.583515 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" event={"ID":"84719922-9618-4293-8f4a-fb525f37eca6","Type":"ContainerDied","Data":"a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a"} Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.583789 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.583591 4932 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.492562 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"] Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.495991 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller" containerID="cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496091 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb" containerID="cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496108 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496200 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd" containerID="cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496247 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb" 
containerID="cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496282 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging" containerID="cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496281 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node" containerID="cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.530638 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" containerID="cri-o://bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603197 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/2.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603697 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603743 4932 generic.go:334] "Generic (PLEG): container finished" podID="1b8d80e2-307e-43b6-9003-e77eef51e084" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b" exitCode=2 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603786 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerDied","Data":"abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b"} Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603837 4932 scope.go:117] "RemoveContainer" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.604588 4932 scope.go:117] "RemoveContainer" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.604829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sj8bg_openshift-multus(1b8d80e2-307e-43b6-9003-e77eef51e084)\"" pod="openshift-multus/multus-sj8bg" podUID="1b8d80e2-307e-43b6-9003-e77eef51e084" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.700708 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.840690 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.842716 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-acl-logging/0.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.843199 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-controller/0.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.843542 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875241 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875377 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875399 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875561 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875705 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875723 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875749 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875980 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876051 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876299 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876287 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876340 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876318 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876367 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876382 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876401 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876754 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876779 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876932 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876457 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash" (OuterVolumeSpecName: "host-slash") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876501 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876520 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log" (OuterVolumeSpecName: "node-log") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket" (OuterVolumeSpecName: "log-socket") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877081 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877110 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877289 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877425 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") "
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877127 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877369 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878091 4932 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878106 4932 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878114 4932 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878123 4932 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878131 4932 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878140 4932 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878149 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878157 4932 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878166 4932 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878195 4932 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878203 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878212 4932 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878221 4932 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878230 4932 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878238 4932 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878262 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.881463 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.883431 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd" (OuterVolumeSpecName: "kube-api-access-xnfjd") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "kube-api-access-xnfjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.895250 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.909634 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-brc6b"]
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.909968 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.909987 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910002 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kubecfg-setup"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910014 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kubecfg-setup"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910025 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910034 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910045 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910053 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910061 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910069 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910080 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910088 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910099 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910110 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910157 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910167 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910192 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910200 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910210 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910218 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910234 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84719922-9618-4293-8f4a-fb525f37eca6" containerName="collect-profiles"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910245 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="84719922-9618-4293-8f4a-fb525f37eca6" containerName="collect-profiles"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910257 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910266 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910280 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910288 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910454 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910470 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910481 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="84719922-9618-4293-8f4a-fb525f37eca6" containerName="collect-profiles"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910489 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910500 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910510 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910520 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910532 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910543 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910554 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910565 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910576 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910702 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910711 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910847 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.913828 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979690 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-var-lib-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979771 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d36c53b-01fa-4726-b231-08718883716e-ovn-node-metrics-cert\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-netd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-node-log\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-netns\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-bin\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980010 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-log-socket\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980072 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-config\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980107 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-etc-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980148 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-env-overrides\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980239 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-script-lib\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-ovn\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980314 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-kubelet\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980344 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-slash\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6q4\" (UniqueName: \"kubernetes.io/projected/1d36c53b-01fa-4726-b231-08718883716e-kube-api-access-7v6q4\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980493 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-systemd-units\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980656 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-systemd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980709 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980724 4932 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980733 4932 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980742 4932 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980751 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-systemd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-var-lib-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082596 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d36c53b-01fa-4726-b231-08718883716e-ovn-node-metrics-cert\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-netd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082641 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-node-log\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-netns\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082703 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-var-lib-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082755 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-bin\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-bin\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082778 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-node-log\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-systemd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-netns\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082884 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-log-socket\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082750 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-netd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082847 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-log-socket\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.083082 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.083130 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName:
\"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.083239 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-config\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084316 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-etc-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-env-overrides\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084450 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-etc-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084470 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084507 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-script-lib\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084594 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-config\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084614 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-ovn\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-ovn\") pod 
\"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-kubelet\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-kubelet\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084810 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-slash\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084845 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6q4\" (UniqueName: \"kubernetes.io/projected/1d36c53b-01fa-4726-b231-08718883716e-kube-api-access-7v6q4\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-slash\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-systemd-units\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084883 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084931 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-systemd-units\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.085396 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-env-overrides\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 
19:45:07.086218 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-script-lib\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.088159 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d36c53b-01fa-4726-b231-08718883716e-ovn-node-metrics-cert\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.107587 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6q4\" (UniqueName: \"kubernetes.io/projected/1d36c53b-01fa-4726-b231-08718883716e-kube-api-access-7v6q4\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.239281 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.611703 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.614786 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-acl-logging/0.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615307 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-controller/0.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615657 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615684 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615696 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615708 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615722 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" 
containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615731 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615734 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615739 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" exitCode=143 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615749 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" exitCode=143 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615817 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615833 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" 
event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615870 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615882 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615897 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615904 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615911 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615918 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615925 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615932 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615939 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615947 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615956 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615968 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615975 4932 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615982 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615988 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615995 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615910 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.616001 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617003 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617022 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617033 4932 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617056 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617076 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617100 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617112 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617121 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617131 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617141 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617150 4932 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617159 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617202 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617214 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617223 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617238 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617444 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617462 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 
19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617475 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617486 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617497 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617506 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617520 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617532 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617542 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617552 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 
19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.630141 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/2.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.637465 4932 generic.go:334] "Generic (PLEG): container finished" podID="1d36c53b-01fa-4726-b231-08718883716e" containerID="b5ba5acb239bcc85d6fb900e2f5d011076f80bb44198b0898b04a4b5cf088411" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.637514 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerDied","Data":"b5ba5acb239bcc85d6fb900e2f5d011076f80bb44198b0898b04a4b5cf088411"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.637572 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"bba935a9383fdf705b50a62b00c562a7cd962d6da05559ee959a51002db4363b"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.666038 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.691334 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"] Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.693487 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"] Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.713245 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.745320 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" Feb 18 19:45:07 crc 
kubenswrapper[4932]: I0218 19:45:07.762827 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.786731 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.803791 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.819845 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.837619 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.863288 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.938213 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.938837 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.938889 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status 
\"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.938920 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.939262 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939320 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939356 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.939680 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939710 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939728 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.940009 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940042 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID 
starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940062 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.940637 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940684 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940713 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.941034 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" Feb 18 
19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941062 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941079 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.941437 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941474 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941495 4932 scope.go:117] "RemoveContainer" 
containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.941764 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941787 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941804 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.942615 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.942644 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.942659 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.943454 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943495 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943522 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943815 4932 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943837 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944042 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944064 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944277 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not 
found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944301 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944572 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944618 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944885 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944922 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945254 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get 
container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945287 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945914 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945951 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946334 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946363 4932 scope.go:117] "RemoveContainer" 
containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946674 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946701 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946965 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947012 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947358 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could 
not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947383 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947707 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947728 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947980 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948001 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 
19:45:07.948325 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948361 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948673 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948692 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949280 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 
6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949305 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949557 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949581 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949964 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950018 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950267 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950303 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950695 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950729 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950951 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not 
exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950981 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951237 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951259 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951523 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951540 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951782 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status 
\"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951823 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952063 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952100 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952371 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952392 4932 scope.go:117] "RemoveContainer" 
containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952598 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952636 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952932 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952956 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953241 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953278 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953563 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953585 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953844 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist"
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647277 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"6d2d9f18453112b56de9ed1ac2ac35c8901b1fa2bfe02601b889488d7438840b"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647636 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"0b4b480687787ca2a9a3cc73678a7b092ef5d087e688486de2b4af956b932c47"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"6583c4490500dca1d49d0026d96ac215560cc1ee103dbd68360d021fd04deeda"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647665 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"5d52d4499a1a0e3056b223da93998d7ffa8e218e668c74677b23d239b090e9f9"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"a06daccd02c3777381f4b576765d08dd3199be82f57c11c85cb1cf79fe779102"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647690 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"f592cd19b3958909e2d6b6e71f37dd57cb55b713aa5f3f3df488337b71de5db5"}
Feb 18 19:45:09 crc kubenswrapper[4932]: I0218 19:45:09.190581 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" path="/var/lib/kubelet/pods/21e3c087-c564-4f66-a656-c92a4e47fa72/volumes"
Feb 18 19:45:11 crc kubenswrapper[4932]: I0218 19:45:11.676430 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"c425b4efb38aa2efc9d9629627f2702df758ae1b94d94669a11f71bfe0306d1f"}
Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.690974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"28204a4c57b4333e8d24a0d941810e9585be3f9271c393cb8887b9724018aae0"}
Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.691587 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.691602 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.746774 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" podStartSLOduration=7.7467580940000005 podStartE2EDuration="7.746758094s" podCreationTimestamp="2026-02-18 19:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:45:13.742981791 +0000 UTC m=+677.324936646" watchObservedRunningTime="2026-02-18 19:45:13.746758094 +0000 UTC m=+677.328712939"
Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.761694 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:14 crc kubenswrapper[4932]: I0218 19:45:14.698008 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:14 crc kubenswrapper[4932]: I0218 19:45:14.741021 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:21 crc kubenswrapper[4932]: I0218 19:45:21.180261 4932 scope.go:117] "RemoveContainer" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b"
Feb 18 19:45:21 crc kubenswrapper[4932]: E0218 19:45:21.181330 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sj8bg_openshift-multus(1b8d80e2-307e-43b6-9003-e77eef51e084)\"" pod="openshift-multus/multus-sj8bg" podUID="1b8d80e2-307e-43b6-9003-e77eef51e084"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.158421 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"]
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.162043 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.164824 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.170385 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"]
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.183609 4932 scope.go:117] "RemoveContainer" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.342674 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.342953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.342976 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444372 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444483 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444879 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.485621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.492221 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529810 4932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529885 4932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529912 4932 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.837482 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/2.log"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.837582 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.837585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"6db89007d836797b0d9c8ef0e092b6c971e31acd8912299653558fc5ddef1d9f"}
Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.839001 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881520 4932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881595 4932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881622 4932 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881710 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405"
Feb 18 19:45:37 crc kubenswrapper[4932]: I0218 19:45:37.274449 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b"
Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.178427 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.183588 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.585326 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"]
Feb 18 19:45:47 crc kubenswrapper[4932]: W0218 19:45:47.595282 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f20105a_5425_4620_98f5_8a6ea6dce405.slice/crio-8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376 WatchSource:0}: Error finding container 8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376: Status 404 returned error can't find the container with id 8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376
Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.917393 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerStarted","Data":"82a8a4080e30730e579c6c585384167b1f2e31f8bf694dbe0b1b55bfb268013a"}
Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.917462 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerStarted","Data":"8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376"}
Feb 18 19:45:48 crc kubenswrapper[4932]: I0218 19:45:48.926748 4932 generic.go:334] "Generic (PLEG): container finished" podID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerID="82a8a4080e30730e579c6c585384167b1f2e31f8bf694dbe0b1b55bfb268013a" exitCode=0
Feb 18 19:45:48 crc kubenswrapper[4932]: I0218 19:45:48.926787 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"82a8a4080e30730e579c6c585384167b1f2e31f8bf694dbe0b1b55bfb268013a"}
Feb 18 19:45:50 crc kubenswrapper[4932]: I0218 19:45:50.949499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerStarted","Data":"8722dd9b254c200ea891e7e29606c1ea6053477495d75e7f6c622013659a39e9"}
Feb 18 19:45:51 crc kubenswrapper[4932]: I0218 19:45:51.967229 4932 generic.go:334] "Generic (PLEG): container finished" podID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerID="8722dd9b254c200ea891e7e29606c1ea6053477495d75e7f6c622013659a39e9" exitCode=0
Feb 18 19:45:51 crc kubenswrapper[4932]: I0218 19:45:51.967314 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"8722dd9b254c200ea891e7e29606c1ea6053477495d75e7f6c622013659a39e9"}
Feb 18 19:45:52 crc kubenswrapper[4932]: I0218 19:45:52.977446 4932 generic.go:334] "Generic (PLEG): container finished" podID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerID="e008405b46c42a7fdfb31d3b35fa53f40c7098db97e1e6146ebd3aa20f18820e" exitCode=0
Feb 18 19:45:52 crc kubenswrapper[4932]: I0218 19:45:52.977518 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"e008405b46c42a7fdfb31d3b35fa53f40c7098db97e1e6146ebd3aa20f18820e"}
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.272645 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.438787 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"5f20105a-5425-4620-98f5-8a6ea6dce405\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") "
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.438976 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"5f20105a-5425-4620-98f5-8a6ea6dce405\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") "
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.439028 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"5f20105a-5425-4620-98f5-8a6ea6dce405\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") "
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.442993 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle" (OuterVolumeSpecName: "bundle") pod "5f20105a-5425-4620-98f5-8a6ea6dce405" (UID: "5f20105a-5425-4620-98f5-8a6ea6dce405"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.443837 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw" (OuterVolumeSpecName: "kube-api-access-c2pcw") pod "5f20105a-5425-4620-98f5-8a6ea6dce405" (UID: "5f20105a-5425-4620-98f5-8a6ea6dce405"). InnerVolumeSpecName "kube-api-access-c2pcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.452946 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util" (OuterVolumeSpecName: "util") pod "5f20105a-5425-4620-98f5-8a6ea6dce405" (UID: "5f20105a-5425-4620-98f5-8a6ea6dce405"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.540686 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.540729 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.540772 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.995546 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376"}
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.995610 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376"
Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.995711 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"
Feb 18 19:45:57 crc kubenswrapper[4932]: I0218 19:45:57.606221 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:45:57 crc kubenswrapper[4932]: I0218 19:45:57.606701 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.436350 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"]
Feb 18 19:46:06 crc kubenswrapper[4932]: E0218 19:46:06.437109 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="util"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437125 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="util"
Feb 18 19:46:06 crc kubenswrapper[4932]: E0218 19:46:06.437137 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="extract"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437144 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="extract"
Feb 18 19:46:06 crc kubenswrapper[4932]: E0218 19:46:06.437219 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="pull"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437229 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="pull"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437345 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="extract"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437730 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.439767 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.439996 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zktvq"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.440113 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.440295 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"]
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.539192 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"]
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.539818 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.542934 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.543097 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-69htb"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.553072 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"]
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.553759 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.567769 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"]
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.581329 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"]
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.593852 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tft\" (UniqueName: \"kubernetes.io/projected/1d614362-98da-46f5-8874-c5afbd3fa2b8-kube-api-access-r2tft\") pod \"obo-prometheus-operator-68bc856cb9-nsqq5\" (UID: \"1d614362-98da-46f5-8874-c5afbd3fa2b8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.695860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tft\" (UniqueName: \"kubernetes.io/projected/1d614362-98da-46f5-8874-c5afbd3fa2b8-kube-api-access-r2tft\") pod \"obo-prometheus-operator-68bc856cb9-nsqq5\" (UID: \"1d614362-98da-46f5-8874-c5afbd3fa2b8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.696556 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.696679 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.698580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.699592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.723157 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tft\" (UniqueName: \"kubernetes.io/projected/1d614362-98da-46f5-8874-c5afbd3fa2b8-kube-api-access-r2tft\") pod \"obo-prometheus-operator-68bc856cb9-nsqq5\" (UID: \"1d614362-98da-46f5-8874-c5afbd3fa2b8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.743571 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gk97g"]
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.744223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gk97g"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.747845 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6hblk"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.748148 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.753063 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.766868 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gk97g"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802310 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802351 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802424 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: 
\"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.807818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.807872 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.808496 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.814961 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.850559 4932 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jnl27"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.851406 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.853260 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-29dxh" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.856758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.860668 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jnl27"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.873614 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.904207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlxdr\" (UniqueName: \"kubernetes.io/projected/aef7f5d0-1875-434a-a818-cc3c9e633fd2-kube-api-access-vlxdr\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.904493 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aef7f5d0-1875-434a-a818-cc3c9e633fd2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6c85304-6fdd-4763-90cb-5a1f61318fd9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlxdr\" (UniqueName: \"kubernetes.io/projected/aef7f5d0-1875-434a-a818-cc3c9e633fd2-kube-api-access-vlxdr\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006159 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aef7f5d0-1875-434a-a818-cc3c9e633fd2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006201 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fzg\" (UniqueName: \"kubernetes.io/projected/d6c85304-6fdd-4763-90cb-5a1f61318fd9-kube-api-access-v9fzg\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.010907 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aef7f5d0-1875-434a-a818-cc3c9e633fd2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.020475 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlxdr\" (UniqueName: \"kubernetes.io/projected/aef7f5d0-1875-434a-a818-cc3c9e633fd2-kube-api-access-vlxdr\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.067197 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.087561 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"] Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.107627 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9fzg\" (UniqueName: \"kubernetes.io/projected/d6c85304-6fdd-4763-90cb-5a1f61318fd9-kube-api-access-v9fzg\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.107728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6c85304-6fdd-4763-90cb-5a1f61318fd9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.108615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6c85304-6fdd-4763-90cb-5a1f61318fd9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.136142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9fzg\" (UniqueName: \"kubernetes.io/projected/d6c85304-6fdd-4763-90cb-5a1f61318fd9-kube-api-access-v9fzg\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 
19:46:07.172077 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.208320 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.210311 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95581019_de0d_4172_9b8a_765b66064517.slice/crio-32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c WatchSource:0}: Error finding container 32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c: Status 404 returned error can't find the container with id 32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.222396 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.228972 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f73debe_8d66_454d_84aa_1559f284bfe0.slice/crio-16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2 WatchSource:0}: Error finding container 16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2: Status 404 returned error can't find the container with id 16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2 Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.410607 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jnl27"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.417015 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6c85304_6fdd_4763_90cb_5a1f61318fd9.slice/crio-f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d WatchSource:0}: Error finding container f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d: Status 404 returned error can't find the container with id f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.536277 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gk97g"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.544438 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef7f5d0_1875_434a_a818_cc3c9e633fd2.slice/crio-d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b WatchSource:0}: Error finding container d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b: Status 404 returned error can't find the container with id d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.092317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" event={"ID":"95581019-de0d-4172-9b8a-765b66064517","Type":"ContainerStarted","Data":"32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.093361 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" event={"ID":"0f73debe-8d66-454d-84aa-1559f284bfe0","Type":"ContainerStarted","Data":"16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.094306 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" event={"ID":"1d614362-98da-46f5-8874-c5afbd3fa2b8","Type":"ContainerStarted","Data":"c565d0b24f3b065325fba0acef142b3e4ba0b505100c5e470aac7e1e822c0c6e"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.095011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" event={"ID":"d6c85304-6fdd-4763-90cb-5a1f61318fd9","Type":"ContainerStarted","Data":"f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.095788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" event={"ID":"aef7f5d0-1875-434a-a818-cc3c9e633fd2","Type":"ContainerStarted","Data":"d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b"} Feb 18 19:46:23 crc kubenswrapper[4932]: E0218 19:46:23.472520 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" Feb 18 19:46:23 crc kubenswrapper[4932]: E0218 19:46:23.473214 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) 
--images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:e797cdb47beef40b04da7b6d645bca3dc32e6247003c45b56b38efd9e13bf01c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.r
edhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:7d662a120305e2528acc7e9142b770b5b6a7f4932ddfcadfa4ac953935124895,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:75465aabb0aa427a5c531a8fcde463f6d119afbcc618ebcbf6b7ee9bc8aad160,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:dc18c8d6a4a9a0a574a57cc5082c8a9b26023bd6d69b9732892d584c1dfe5070,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:369729978cecdc13c99ef3d179f8eb8a450a4a0cb70b63c27a55a15d1710ba27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:d8c7a61d147f62b204d5c5f16864386025393453c9a81ea327bbd25d7765d611,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:b4a6eb1cc118a4334b424614959d8b7f361ddd779b3a72690ca49b0a3f26d9b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:21d4fff670893ba4b7fbc528cd49f8b71c8281cede9ef84f0697065bb6a7fc50,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:12d9dbe297a1c3b9df671f21156992082bc483887d851fafe76e5d17321ff474,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:e65c37f04f6d76a0cbfe05edb3cddf6a8f14f859ee35
cf3aebea8fcb991d2c19,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:48e4e178c6eeaa9d5dd77a591c185a311b4b4a5caadb7199d48463123e31dc9e,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-59bdc8b94-gk97g_openshift-operators(aef7f5d0-1875-434a-a818-cc3c9e633fd2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:46:23 crc kubenswrapper[4932]: E0218 19:46:23.474561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" podUID="aef7f5d0-1875-434a-a818-cc3c9e633fd2" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.214436 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" event={"ID":"1d614362-98da-46f5-8874-c5afbd3fa2b8","Type":"ContainerStarted","Data":"1beee27f9333a46f30214fd7417c40bf44e5f148744ce3947732abbf222044d7"} Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.220696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" 
event={"ID":"d6c85304-6fdd-4763-90cb-5a1f61318fd9","Type":"ContainerStarted","Data":"e4210edaee16a6f0ff19c833afffd3fd3218cd2b660956ef5d2e461dedff7d7f"} Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.220823 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.222730 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" event={"ID":"95581019-de0d-4172-9b8a-765b66064517","Type":"ContainerStarted","Data":"9886150bbae0e3e1b23ad3029dfb5ba511369eebf9bae89e9fe9f27a0f5e5110"} Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.224023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" event={"ID":"0f73debe-8d66-454d-84aa-1559f284bfe0","Type":"ContainerStarted","Data":"b0b36dbceac1e72b55713caad25471d2e55dae7c0b661105d9033e42a9f6d8a8"} Feb 18 19:46:24 crc kubenswrapper[4932]: E0218 19:46:24.225508 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c\\\"\"" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" podUID="aef7f5d0-1875-434a-a818-cc3c9e633fd2" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.232107 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" podStartSLOduration=1.8355237 podStartE2EDuration="18.232093107s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.11989181 +0000 UTC m=+730.701846655" lastFinishedPulling="2026-02-18 
19:46:23.516461207 +0000 UTC m=+747.098416062" observedRunningTime="2026-02-18 19:46:24.230922108 +0000 UTC m=+747.812876953" watchObservedRunningTime="2026-02-18 19:46:24.232093107 +0000 UTC m=+747.814047952" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.254852 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" podStartSLOduration=1.95211917 podStartE2EDuration="18.254834828s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.213103173 +0000 UTC m=+730.795058028" lastFinishedPulling="2026-02-18 19:46:23.515818801 +0000 UTC m=+747.097773686" observedRunningTime="2026-02-18 19:46:24.25247826 +0000 UTC m=+747.834433105" watchObservedRunningTime="2026-02-18 19:46:24.254834828 +0000 UTC m=+747.836789673" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.283372 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" podStartSLOduration=1.986008778 podStartE2EDuration="18.283358923s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.234284287 +0000 UTC m=+730.816239132" lastFinishedPulling="2026-02-18 19:46:23.531634422 +0000 UTC m=+747.113589277" observedRunningTime="2026-02-18 19:46:24.281815985 +0000 UTC m=+747.863770830" watchObservedRunningTime="2026-02-18 19:46:24.283358923 +0000 UTC m=+747.865313768" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.339568 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" podStartSLOduration=2.242927495 podStartE2EDuration="18.339552581s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.418773505 +0000 UTC m=+731.000728350" lastFinishedPulling="2026-02-18 19:46:23.515398551 +0000 UTC 
m=+747.097353436" observedRunningTime="2026-02-18 19:46:24.339006468 +0000 UTC m=+747.920961313" watchObservedRunningTime="2026-02-18 19:46:24.339552581 +0000 UTC m=+747.921507426" Feb 18 19:46:27 crc kubenswrapper[4932]: I0218 19:46:27.606460 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:46:27 crc kubenswrapper[4932]: I0218 19:46:27.606714 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:46:30 crc kubenswrapper[4932]: I0218 19:46:30.767254 4932 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:46:37 crc kubenswrapper[4932]: I0218 19:46:37.174923 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.327343 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" event={"ID":"aef7f5d0-1875-434a-a818-cc3c9e633fd2","Type":"ContainerStarted","Data":"e348e292221ea402acd6c97879f83914f45375b5d839485b5ac139ae935d29bb"} Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.327943 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.330256 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.375966 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" podStartSLOduration=2.521598531 podStartE2EDuration="32.375946808s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.550491489 +0000 UTC m=+731.132446334" lastFinishedPulling="2026-02-18 19:46:37.404839746 +0000 UTC m=+760.986794611" observedRunningTime="2026-02-18 19:46:38.349555906 +0000 UTC m=+761.931510751" watchObservedRunningTime="2026-02-18 19:46:38.375946808 +0000 UTC m=+761.957901653" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.491217 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.493060 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.502577 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.507543 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.507614 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"community-operators-khmxv\" (UID: 
\"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.507706 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.609661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.609792 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.609885 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.610865 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"community-operators-khmxv\" (UID: 
\"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.611028 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.636692 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.811876 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:54 crc kubenswrapper[4932]: I0218 19:46:54.249018 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:46:54 crc kubenswrapper[4932]: I0218 19:46:54.422486 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a"} Feb 18 19:46:54 crc kubenswrapper[4932]: I0218 19:46:54.422526 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"d2d248bc96ff686741bc9d18c111e029f66e7b55a12534b4aee09014a335d602"} Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.430204 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerID="4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a" exitCode=0 Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.430373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a"} Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.430529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed"} Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.902057 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"] Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.903835 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.906482 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.917741 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"] Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.038474 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.038617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.038723 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.139582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.139774 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.139849 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.140132 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.140236 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.168212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.219090 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.395562 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"] Feb 18 19:46:56 crc kubenswrapper[4932]: W0218 19:46:56.399783 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0698d2a5_118e_4c2b_8325_875aab6bdc97.slice/crio-14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e WatchSource:0}: Error finding container 14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e: Status 404 returned error can't find the container with id 14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.437072 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerStarted","Data":"14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e"} Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.439151 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerID="e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed" exitCode=0 Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.439200 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed"} Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.447027 4932 generic.go:334] "Generic (PLEG): container finished" podID="0698d2a5-118e-4c2b-8325-875aab6bdc97" 
containerID="79642c89b2ecf4466d565ea49dfa91f23064e31795a11ae1577cb45f3de77ea8" exitCode=0 Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.447073 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"79642c89b2ecf4466d565ea49dfa91f23064e31795a11ae1577cb45f3de77ea8"} Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.453366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673"} Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.503685 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khmxv" podStartSLOduration=2.093577272 podStartE2EDuration="4.503664587s" podCreationTimestamp="2026-02-18 19:46:53 +0000 UTC" firstStartedPulling="2026-02-18 19:46:54.424649639 +0000 UTC m=+778.006604484" lastFinishedPulling="2026-02-18 19:46:56.834736924 +0000 UTC m=+780.416691799" observedRunningTime="2026-02-18 19:46:57.497925355 +0000 UTC m=+781.079880210" watchObservedRunningTime="2026-02-18 19:46:57.503664587 +0000 UTC m=+781.085619442" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.606417 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.606476 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.606520 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.607148 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.607269 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be" gracePeriod=600 Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.462869 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be" exitCode=0 Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.462940 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be"} Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.463299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992"} Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.463331 4932 scope.go:117] "RemoveContainer" containerID="4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.443287 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.446308 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.454465 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.492298 4932 generic.go:334] "Generic (PLEG): container finished" podID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerID="74eb47cb548078d77d4e3d87c2d001faf8bfea77fb446fed977ebb6f7bac086a" exitCode=0 Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.492355 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"74eb47cb548078d77d4e3d87c2d001faf8bfea77fb446fed977ebb6f7bac086a"} Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.584518 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.584659 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.584698 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685323 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685380 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685823 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.706840 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.769205 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.983075 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.506324 4932 generic.go:334] "Generic (PLEG): container finished" podID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerID="f63cbdb6097a0bca0c288a3476870c08a7ee4283f8925a22c928513fd0acda40" exitCode=0 Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.506426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"f63cbdb6097a0bca0c288a3476870c08a7ee4283f8925a22c928513fd0acda40"} Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.507925 4932 generic.go:334] "Generic (PLEG): container finished" podID="48955357-bac8-4bc1-80f8-939c59861c52" containerID="297851fa0e23edca20383a70a1a308b0693ebe352c76ce53a3ade9506c01c89a" exitCode=0 Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.507955 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"297851fa0e23edca20383a70a1a308b0693ebe352c76ce53a3ade9506c01c89a"} Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.507974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerStarted","Data":"d674cd5ee7e951ca323d8c8395fb5893fb353533a055e7722bd24a4a5c045733"} Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.514134 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" 
event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerStarted","Data":"da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4"} Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.777323 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.820752 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"0698d2a5-118e-4c2b-8325-875aab6bdc97\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.821408 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"0698d2a5-118e-4c2b-8325-875aab6bdc97\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.821568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod \"0698d2a5-118e-4c2b-8325-875aab6bdc97\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.825868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle" (OuterVolumeSpecName: "bundle") pod "0698d2a5-118e-4c2b-8325-875aab6bdc97" (UID: "0698d2a5-118e-4c2b-8325-875aab6bdc97"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.832475 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n" (OuterVolumeSpecName: "kube-api-access-jwr4n") pod "0698d2a5-118e-4c2b-8325-875aab6bdc97" (UID: "0698d2a5-118e-4c2b-8325-875aab6bdc97"). InnerVolumeSpecName "kube-api-access-jwr4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.835106 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util" (OuterVolumeSpecName: "util") pod "0698d2a5-118e-4c2b-8325-875aab6bdc97" (UID: "0698d2a5-118e-4c2b-8325-875aab6bdc97"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.923635 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.923860 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.923937 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.527297 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" 
event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e"}
Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.527612 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e"
Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.527748 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"
Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.530872 4932 generic.go:334] "Generic (PLEG): container finished" podID="48955357-bac8-4bc1-80f8-939c59861c52" containerID="da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4" exitCode=0
Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.530968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4"}
Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.543116 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerStarted","Data":"06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee"}
Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.569284 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brmkq" podStartSLOduration=2.11760456 podStartE2EDuration="4.569268793s" podCreationTimestamp="2026-02-18 19:46:59 +0000 UTC" firstStartedPulling="2026-02-18 19:47:00.509474138 +0000 UTC m=+784.091428983" lastFinishedPulling="2026-02-18 19:47:02.961138331 +0000 UTC m=+786.543093216" observedRunningTime="2026-02-18 19:47:03.568626297 +0000 UTC m=+787.150581182" watchObservedRunningTime="2026-02-18 19:47:03.569268793 +0000 UTC m=+787.151223638"
Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.813095 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khmxv"
Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.813195 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khmxv"
Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.888263 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khmxv"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.615147 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khmxv"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.771947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"]
Feb 18 19:47:04 crc kubenswrapper[4932]: E0218 19:47:04.772216 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="extract"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772235 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="extract"
Feb 18 19:47:04 crc kubenswrapper[4932]: E0218 19:47:04.772250 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="util"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772258 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="util"
Feb 18 19:47:04 crc kubenswrapper[4932]: E0218 19:47:04.772271 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="pull"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772279 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="pull"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772413 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="extract"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772876 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.774503 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-h4lf6"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.775311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.775849 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.780887 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"]
Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.963149 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrnpt\" (UniqueName: \"kubernetes.io/projected/077cce1b-0169-481c-adf2-3d0536d1c943-kube-api-access-xrnpt\") pod \"nmstate-operator-694c9596b7-rlwc2\" (UID: \"077cce1b-0169-481c-adf2-3d0536d1c943\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"
Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.064254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrnpt\" (UniqueName: \"kubernetes.io/projected/077cce1b-0169-481c-adf2-3d0536d1c943-kube-api-access-xrnpt\") pod \"nmstate-operator-694c9596b7-rlwc2\" (UID: \"077cce1b-0169-481c-adf2-3d0536d1c943\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"
Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.097362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrnpt\" (UniqueName: \"kubernetes.io/projected/077cce1b-0169-481c-adf2-3d0536d1c943-kube-api-access-xrnpt\") pod \"nmstate-operator-694c9596b7-rlwc2\" (UID: \"077cce1b-0169-481c-adf2-3d0536d1c943\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"
Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.391688 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"
Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.595017 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"]
Feb 18 19:47:06 crc kubenswrapper[4932]: I0218 19:47:06.567368 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" event={"ID":"077cce1b-0169-481c-adf2-3d0536d1c943","Type":"ContainerStarted","Data":"9036f1f8edf5dcf4d85cbb3331dfdb47f771b2dc1ad3908d80b077b1fd5a733d"}
Feb 18 19:47:07 crc kubenswrapper[4932]: I0218 19:47:07.643311 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khmxv"]
Feb 18 19:47:07 crc kubenswrapper[4932]: I0218 19:47:07.643865 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khmxv" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server" containerID="cri-o://07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673" gracePeriod=2
Feb 18 19:47:09 crc kubenswrapper[4932]: I0218 19:47:09.769606 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brmkq"
Feb 18 19:47:09 crc kubenswrapper[4932]: I0218 19:47:09.769959 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brmkq"
Feb 18 19:47:09 crc kubenswrapper[4932]: I0218 19:47:09.839010 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brmkq"
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.601056 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerID="07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673" exitCode=0
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.601860 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673"}
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.644016 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brmkq"
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.785252 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmxv"
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.843263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") "
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.843342 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") "
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.843373 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") "
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.844115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities" (OuterVolumeSpecName: "utilities") pod "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" (UID: "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.852300 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd" (OuterVolumeSpecName: "kube-api-access-vrxgd") pod "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" (UID: "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6"). InnerVolumeSpecName "kube-api-access-vrxgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.895025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" (UID: "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.945227 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.945256 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.945267 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") on node \"crc\" DevicePath \"\""
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.609697 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" event={"ID":"077cce1b-0169-481c-adf2-3d0536d1c943","Type":"ContainerStarted","Data":"255b98631482fd40b1d833f4ee662645c35802d56c36c72ec267be78709aa9ec"}
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.613252 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmxv"
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.613825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"d2d248bc96ff686741bc9d18c111e029f66e7b55a12534b4aee09014a335d602"}
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.613865 4932 scope.go:117] "RemoveContainer" containerID="07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673"
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.632210 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" podStartSLOduration=1.873537918 podStartE2EDuration="7.632195151s" podCreationTimestamp="2026-02-18 19:47:04 +0000 UTC" firstStartedPulling="2026-02-18 19:47:05.604275775 +0000 UTC m=+789.186230620" lastFinishedPulling="2026-02-18 19:47:11.362932998 +0000 UTC m=+794.944887853" observedRunningTime="2026-02-18 19:47:11.631170216 +0000 UTC m=+795.213125071" watchObservedRunningTime="2026-02-18 19:47:11.632195151 +0000 UTC m=+795.214149996"
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.645547 4932 scope.go:117] "RemoveContainer" containerID="e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed"
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.648589 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khmxv"]
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.652579 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khmxv"]
Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.667323 4932 scope.go:117] "RemoveContainer" containerID="4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.641921 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"]
Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.643765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-utilities"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.643934 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-utilities"
Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.644049 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-content"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.644157 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-content"
Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.644334 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.644435 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.644754 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.645828 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.649672 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pzp9s"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.659782 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"]
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.661615 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.665695 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.667815 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"]
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.685109 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dktbf"]
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.686231 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.700019 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"]
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj96x\" (UniqueName: \"kubernetes.io/projected/df77ffd7-3518-44da-b978-578bfb225ede-kube-api-access-mj96x\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-dbus-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4gm\" (UniqueName: \"kubernetes.io/projected/c87a209a-d7a2-4615-87b9-e8c9ec5a8b91-kube-api-access-ms4gm\") pod \"nmstate-metrics-58c85c668d-mtshd\" (UID: \"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765335 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-nmstate-lock\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-ovs-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765421 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58rj\" (UniqueName: \"kubernetes.io/projected/3dc40b19-6391-4590-9ffd-820e3e865431-kube-api-access-j58rj\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.784998 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"]
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.785656 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.788928 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.789143 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hzhgz"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.789270 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.799523 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"]
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866476 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58rj\" (UniqueName: \"kubernetes.io/projected/3dc40b19-6391-4590-9ffd-820e3e865431-kube-api-access-j58rj\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866540 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckntg\" (UniqueName: \"kubernetes.io/projected/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-kube-api-access-ckntg\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj96x\" (UniqueName: \"kubernetes.io/projected/df77ffd7-3518-44da-b978-578bfb225ede-kube-api-access-mj96x\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866609 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-dbus-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4gm\" (UniqueName: \"kubernetes.io/projected/c87a209a-d7a2-4615-87b9-e8c9ec5a8b91-kube-api-access-ms4gm\") pod \"nmstate-metrics-58c85c668d-mtshd\" (UID: \"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866647 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866681 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-nmstate-lock\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866695 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-ovs-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-ovs-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.867399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-dbus-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.867574 4932 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.867626 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair podName:3dc40b19-6391-4590-9ffd-820e3e865431 nodeName:}" failed. No retries permitted until 2026-02-18 19:47:13.367601028 +0000 UTC m=+796.949555873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair") pod "nmstate-webhook-866bcb46dc-bm8xm" (UID: "3dc40b19-6391-4590-9ffd-820e3e865431") : secret "openshift-nmstate-webhook" not found
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.867747 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-nmstate-lock\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.902023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4gm\" (UniqueName: \"kubernetes.io/projected/c87a209a-d7a2-4615-87b9-e8c9ec5a8b91-kube-api-access-ms4gm\") pod \"nmstate-metrics-58c85c668d-mtshd\" (UID: \"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.918024 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58rj\" (UniqueName: \"kubernetes.io/projected/3dc40b19-6391-4590-9ffd-820e3e865431-kube-api-access-j58rj\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.931822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj96x\" (UniqueName: \"kubernetes.io/projected/df77ffd7-3518-44da-b978-578bfb225ede-kube-api-access-mj96x\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.968215 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.968319 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.968340 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckntg\" (UniqueName: \"kubernetes.io/projected/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-kube-api-access-ckntg\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.969971 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.970091 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"
Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.972527 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.014598 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dktbf"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.015001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckntg\" (UniqueName: \"kubernetes.io/projected/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-kube-api-access-ckntg\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.071599 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf77ffd7_3518_44da_b978_578bfb225ede.slice/crio-0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15 WatchSource:0}: Error finding container 0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15: Status 404 returned error can't find the container with id 0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.103599 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.154964 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69545748b6-t8skh"]
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.156016 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.174905 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69545748b6-t8skh"]
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.203132 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" path="/var/lib/kubelet/pods/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6/volumes"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.235899 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"]
Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.240799 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87a209a_d7a2_4615_87b9_e8c9ec5a8b91.slice/crio-0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80 WatchSource:0}: Error finding container 0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80: Status 404 returned error can't find the container with id 0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272633 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-trusted-ca-bundle\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272680 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-oauth-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272709 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qss7\" (UniqueName: \"kubernetes.io/projected/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-kube-api-access-2qss7\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272848 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-service-ca\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272935 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-oauth-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.363963 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"]
Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.371578 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cca088e_edc8_4ce7_9ce4_e0561b2576e3.slice/crio-70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d WatchSource:0}: Error finding container 70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d: Status 404 returned error can't find the container with id 70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373465 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qss7\" (UniqueName: \"kubernetes.io/projected/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-kube-api-access-2qss7\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-service-ca\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-oauth-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373608 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-trusted-ca-bundle\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh"
Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-oauth-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " 
pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.374635 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.374725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-trusted-ca-bundle\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.374980 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-service-ca\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.375365 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-oauth-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc 
kubenswrapper[4932]: I0218 19:47:13.380726 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.380746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.381212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-oauth-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.388542 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qss7\" (UniqueName: \"kubernetes.io/projected/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-kube-api-access-2qss7\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.439016 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.439296 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brmkq" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" 
containerID="cri-o://06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee" gracePeriod=2 Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.498951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.585722 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.632887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" event={"ID":"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91","Type":"ContainerStarted","Data":"0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.634997 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" event={"ID":"2cca088e-edc8-4ce7-9ce4-e0561b2576e3","Type":"ContainerStarted","Data":"70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.637676 4932 generic.go:334] "Generic (PLEG): container finished" podID="48955357-bac8-4bc1-80f8-939c59861c52" containerID="06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee" exitCode=0 Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.637740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.639022 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dktbf" 
event={"ID":"df77ffd7-3518-44da-b978-578bfb225ede","Type":"ContainerStarted","Data":"0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.698437 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69545748b6-t8skh"] Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.706453 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7d8b2e_aed4_4db7_98ea_5226f18411a7.slice/crio-1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c WatchSource:0}: Error finding container 1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c: Status 404 returned error can't find the container with id 1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.817003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.881098 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"48955357-bac8-4bc1-80f8-939c59861c52\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.881475 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"48955357-bac8-4bc1-80f8-939c59861c52\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.881551 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"48955357-bac8-4bc1-80f8-939c59861c52\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.882346 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities" (OuterVolumeSpecName: "utilities") pod "48955357-bac8-4bc1-80f8-939c59861c52" (UID: "48955357-bac8-4bc1-80f8-939c59861c52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.887377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2" (OuterVolumeSpecName: "kube-api-access-kn2n2") pod "48955357-bac8-4bc1-80f8-939c59861c52" (UID: "48955357-bac8-4bc1-80f8-939c59861c52"). InnerVolumeSpecName "kube-api-access-kn2n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.982866 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.982899 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.016290 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"] Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.018466 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48955357-bac8-4bc1-80f8-939c59861c52" (UID: "48955357-bac8-4bc1-80f8-939c59861c52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:14 crc kubenswrapper[4932]: W0218 19:47:14.024771 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dc40b19_6391_4590_9ffd_820e3e865431.slice/crio-c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c WatchSource:0}: Error finding container c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c: Status 404 returned error can't find the container with id c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.083987 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.647355 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.647355 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"d674cd5ee7e951ca323d8c8395fb5893fb353533a055e7722bd24a4a5c045733"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.647799 4932 scope.go:117] "RemoveContainer" containerID="06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.648677 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69545748b6-t8skh" event={"ID":"3f7d8b2e-aed4-4db7-98ea-5226f18411a7","Type":"ContainerStarted","Data":"b7dc11f14fa6aebbc14ff387ecb9a2cb207823a3d53e2ca6bda864cb7b4b9e16"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.648716 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-69545748b6-t8skh" event={"ID":"3f7d8b2e-aed4-4db7-98ea-5226f18411a7","Type":"ContainerStarted","Data":"1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.649821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" event={"ID":"3dc40b19-6391-4590-9ffd-820e3e865431","Type":"ContainerStarted","Data":"c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.669066 4932 scope.go:117] "RemoveContainer" containerID="da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.679440 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69545748b6-t8skh" podStartSLOduration=1.679406674 podStartE2EDuration="1.679406674s" podCreationTimestamp="2026-02-18 19:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:47:14.667710595 +0000 UTC m=+798.249665440" watchObservedRunningTime="2026-02-18 19:47:14.679406674 +0000 UTC m=+798.261361549" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.691143 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.695123 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.719106 4932 scope.go:117] "RemoveContainer" containerID="297851fa0e23edca20383a70a1a308b0693ebe352c76ce53a3ade9506c01c89a" Feb 18 19:47:15 crc kubenswrapper[4932]: I0218 19:47:15.185970 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48955357-bac8-4bc1-80f8-939c59861c52" 
path="/var/lib/kubelet/pods/48955357-bac8-4bc1-80f8-939c59861c52/volumes" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.691648 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" event={"ID":"3dc40b19-6391-4590-9ffd-820e3e865431","Type":"ContainerStarted","Data":"ac1aa61dedb6411726ebb230a5b291e86bcaede72eb93235a8bd412981c79c96"} Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.692119 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.693258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dktbf" event={"ID":"df77ffd7-3518-44da-b978-578bfb225ede","Type":"ContainerStarted","Data":"a97b21848400f2c5eb4a9c371a397bde571296c3bc3efec1ee973727e41a264b"} Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.693374 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.695772 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" event={"ID":"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91","Type":"ContainerStarted","Data":"d459f3f8ae2e716ea8ba1253e78ec6c5c664043541e7d60da754ef158668de73"} Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.721324 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" podStartSLOduration=2.587090027 podStartE2EDuration="4.721293497s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:14.0278031 +0000 UTC m=+797.609757945" lastFinishedPulling="2026-02-18 19:47:16.16200656 +0000 UTC m=+799.743961415" observedRunningTime="2026-02-18 19:47:16.712812558 +0000 UTC m=+800.294767453" watchObservedRunningTime="2026-02-18 
19:47:16.721293497 +0000 UTC m=+800.303248782" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.730369 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dktbf" podStartSLOduration=1.687606878 podStartE2EDuration="4.730350451s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:13.073710433 +0000 UTC m=+796.655665278" lastFinishedPulling="2026-02-18 19:47:16.116454006 +0000 UTC m=+799.698408851" observedRunningTime="2026-02-18 19:47:16.730060653 +0000 UTC m=+800.312015508" watchObservedRunningTime="2026-02-18 19:47:16.730350451 +0000 UTC m=+800.312305306" Feb 18 19:47:17 crc kubenswrapper[4932]: I0218 19:47:17.705124 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" event={"ID":"2cca088e-edc8-4ce7-9ce4-e0561b2576e3","Type":"ContainerStarted","Data":"97252ad2b4caf0138455b3bf0e0dbc5147313ac1c51a11fe95e90b726143239b"} Feb 18 19:47:17 crc kubenswrapper[4932]: I0218 19:47:17.729910 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" podStartSLOduration=1.885844648 podStartE2EDuration="5.729889619s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:13.373299654 +0000 UTC m=+796.955254499" lastFinishedPulling="2026-02-18 19:47:17.217344625 +0000 UTC m=+800.799299470" observedRunningTime="2026-02-18 19:47:17.727240204 +0000 UTC m=+801.309195059" watchObservedRunningTime="2026-02-18 19:47:17.729889619 +0000 UTC m=+801.311844504" Feb 18 19:47:18 crc kubenswrapper[4932]: I0218 19:47:18.715789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" event={"ID":"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91","Type":"ContainerStarted","Data":"e11ac389b1643b96b712d27ac370f21963d6190bb0419ed092d0d691bf04c80d"} Feb 18 19:47:23 crc 
kubenswrapper[4932]: I0218 19:47:23.052156 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.090118 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" podStartSLOduration=5.948705127 podStartE2EDuration="11.090087002s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:13.243627375 +0000 UTC m=+796.825582220" lastFinishedPulling="2026-02-18 19:47:18.38500925 +0000 UTC m=+801.966964095" observedRunningTime="2026-02-18 19:47:18.743628927 +0000 UTC m=+802.325583802" watchObservedRunningTime="2026-02-18 19:47:23.090087002 +0000 UTC m=+806.672041917" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.499484 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.500499 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.509703 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.761717 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.848135 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:47:33 crc kubenswrapper[4932]: I0218 19:47:33.591828 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:48 crc kubenswrapper[4932]: I0218 19:47:48.893956 4932 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-console/console-f9d7485db-fgjll" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" containerID="cri-o://fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" gracePeriod=15 Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319110 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5"] Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.319731 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-content" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319747 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-content" Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.319768 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-utilities" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319777 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-utilities" Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.319792 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319803 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319942 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.320909 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.322620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.327074 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5"] Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.342519 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgjll_f9f46b79-f300-42de-a2c3-a35670822a3b/console/0.log" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.342795 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366521 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366687 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366730 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366773 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.372754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h" (OuterVolumeSpecName: "kube-api-access-mx42h") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "kube-api-access-mx42h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467730 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467804 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467833 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467883 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467899 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468057 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468089 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468126 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468163 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.469843 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config" (OuterVolumeSpecName: "console-config") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.469855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.469893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca" (OuterVolumeSpecName: "service-ca") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.470388 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.470397 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.470664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.472858 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.481406 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.484471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569102 4932 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569386 4932 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569467 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569538 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569965 4932 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.570076 4932 reconciler_common.go:293] "Volume detached 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.655942 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.889990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5"] Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.950696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerStarted","Data":"c38428a6e3b21c78c6d9d3924af8a505c31ccc899e94a488d5cc5f28d383bfc9"} Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952411 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgjll_f9f46b79-f300-42de-a2c3-a35670822a3b/console/0.log" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952446 4932 generic.go:334] "Generic (PLEG): container finished" podID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" exitCode=2 Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952467 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerDied","Data":"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c"} Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952482 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" 
event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerDied","Data":"a258bd567aafbecb3f6618d81a779cce26f985331e18b4b996cf0d535bef2a19"} Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952497 4932 scope.go:117] "RemoveContainer" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952593 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.987671 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.988005 4932 scope.go:117] "RemoveContainer" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.988511 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c\": container with ID starting with fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c not found: ID does not exist" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.988556 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c"} err="failed to get container status \"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c\": rpc error: code = NotFound desc = could not find container \"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c\": container with ID starting with fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c not found: ID does not exist" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.992636 4932 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:47:50 crc kubenswrapper[4932]: I0218 19:47:50.966603 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerID="f51a4d3e89d0a5a8fd82a1fed44e1aefc0430abe321fe905050ced3e2abf82fb" exitCode=0 Feb 18 19:47:50 crc kubenswrapper[4932]: I0218 19:47:50.966706 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"f51a4d3e89d0a5a8fd82a1fed44e1aefc0430abe321fe905050ced3e2abf82fb"} Feb 18 19:47:51 crc kubenswrapper[4932]: I0218 19:47:51.192705 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" path="/var/lib/kubelet/pods/f9f46b79-f300-42de-a2c3-a35670822a3b/volumes" Feb 18 19:47:52 crc kubenswrapper[4932]: I0218 19:47:52.987847 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerID="cb3448978aad8e47dae129c8a579b242cfd73b2277d5823d432278a0447baf5e" exitCode=0 Feb 18 19:47:52 crc kubenswrapper[4932]: I0218 19:47:52.987925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"cb3448978aad8e47dae129c8a579b242cfd73b2277d5823d432278a0447baf5e"} Feb 18 19:47:53 crc kubenswrapper[4932]: I0218 19:47:53.997024 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerID="793ad19debb6605cbb46965ff2a638a15cb70262a2b339958167435302ce32c2" exitCode=0 Feb 18 19:47:53 crc kubenswrapper[4932]: I0218 19:47:53.997089 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"793ad19debb6605cbb46965ff2a638a15cb70262a2b339958167435302ce32c2"} Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.225597 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.262270 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.262359 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.262479 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.264074 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle" (OuterVolumeSpecName: "bundle") pod "1a7b21b3-8c0f-4904-8cee-63e55c2e1511" (UID: "1a7b21b3-8c0f-4904-8cee-63e55c2e1511"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.271403 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk" (OuterVolumeSpecName: "kube-api-access-w59gk") pod "1a7b21b3-8c0f-4904-8cee-63e55c2e1511" (UID: "1a7b21b3-8c0f-4904-8cee-63e55c2e1511"). InnerVolumeSpecName "kube-api-access-w59gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.278153 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util" (OuterVolumeSpecName: "util") pod "1a7b21b3-8c0f-4904-8cee-63e55c2e1511" (UID: "1a7b21b3-8c0f-4904-8cee-63e55c2e1511"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.365023 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.365051 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.365061 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:56 crc kubenswrapper[4932]: I0218 19:47:56.013622 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" 
event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"c38428a6e3b21c78c6d9d3924af8a505c31ccc899e94a488d5cc5f28d383bfc9"} Feb 18 19:47:56 crc kubenswrapper[4932]: I0218 19:47:56.013667 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38428a6e3b21c78c6d9d3924af8a505c31ccc899e94a488d5cc5f28d383bfc9" Feb 18 19:47:56 crc kubenswrapper[4932]: I0218 19:47:56.013696 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.571990 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4"] Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572897 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="pull" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.572914 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="pull" Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572949 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="util" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.572958 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="util" Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572975 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="extract" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.572982 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="extract" Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572996 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573003 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573199 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573218 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="extract" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573715 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.576117 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dml6q" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.576783 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.576905 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.577008 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.577290 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.582910 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4"] Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.594567 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqd9c\" (UniqueName: \"kubernetes.io/projected/840c5b86-35ae-4432-9352-c830b6034aaf-kube-api-access-pqd9c\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.594644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-webhook-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.594693 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.695553 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqd9c\" (UniqueName: \"kubernetes.io/projected/840c5b86-35ae-4432-9352-c830b6034aaf-kube-api-access-pqd9c\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.695601 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-webhook-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.695630 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.703873 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.703909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-webhook-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.717688 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqd9c\" (UniqueName: \"kubernetes.io/projected/840c5b86-35ae-4432-9352-c830b6034aaf-kube-api-access-pqd9c\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: 
\"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.889658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.033433 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd"] Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.034319 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.040745 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.040767 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ds2x2" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.040953 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.053964 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd"] Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.100393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmr6w\" (UniqueName: \"kubernetes.io/projected/c6536847-a5f8-42e0-9493-a016c3f8b53f-kube-api-access-vmr6w\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.100467 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-apiservice-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.100494 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-webhook-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.202142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-apiservice-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.202205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-webhook-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.206246 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-apiservice-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: 
\"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.215789 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-webhook-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.228295 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmr6w\" (UniqueName: \"kubernetes.io/projected/c6536847-a5f8-42e0-9493-a016c3f8b53f-kube-api-access-vmr6w\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.245238 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmr6w\" (UniqueName: \"kubernetes.io/projected/c6536847-a5f8-42e0-9493-a016c3f8b53f-kube-api-access-vmr6w\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.350988 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.560160 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4"] Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.856204 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd"] Feb 18 19:48:07 crc kubenswrapper[4932]: I0218 19:48:07.081986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" event={"ID":"840c5b86-35ae-4432-9352-c830b6034aaf","Type":"ContainerStarted","Data":"e7b58ae27127882aabbdd8991b71be5274f019a48e6236e98d69aab35e4ae6cd"} Feb 18 19:48:07 crc kubenswrapper[4932]: I0218 19:48:07.083754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" event={"ID":"c6536847-a5f8-42e0-9493-a016c3f8b53f","Type":"ContainerStarted","Data":"a3c628572593a80b5070242a20e0978a51cfe973cbffc4ab07aa442f145e5e25"} Feb 18 19:48:12 crc kubenswrapper[4932]: I0218 19:48:12.123902 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" event={"ID":"840c5b86-35ae-4432-9352-c830b6034aaf","Type":"ContainerStarted","Data":"a7859407860e12d75b4172291d51849ca03e8035e97d61473bbdaae55505d47e"} Feb 18 19:48:12 crc kubenswrapper[4932]: I0218 19:48:12.124482 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:12 crc kubenswrapper[4932]: I0218 19:48:12.155387 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" podStartSLOduration=1.9860522280000001 podStartE2EDuration="7.155351558s" 
podCreationTimestamp="2026-02-18 19:48:05 +0000 UTC" firstStartedPulling="2026-02-18 19:48:06.575794331 +0000 UTC m=+850.157749176" lastFinishedPulling="2026-02-18 19:48:11.745093661 +0000 UTC m=+855.327048506" observedRunningTime="2026-02-18 19:48:12.148211402 +0000 UTC m=+855.730166287" watchObservedRunningTime="2026-02-18 19:48:12.155351558 +0000 UTC m=+855.737306423" Feb 18 19:48:13 crc kubenswrapper[4932]: I0218 19:48:13.131151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" event={"ID":"c6536847-a5f8-42e0-9493-a016c3f8b53f","Type":"ContainerStarted","Data":"d2a18840a94e142760a0d0f0872aee1b013b1f8ca1c49a89447adb1fe93d4942"} Feb 18 19:48:13 crc kubenswrapper[4932]: I0218 19:48:13.131237 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:13 crc kubenswrapper[4932]: I0218 19:48:13.159270 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" podStartSLOduration=1.266925704 podStartE2EDuration="7.159250423s" podCreationTimestamp="2026-02-18 19:48:06 +0000 UTC" firstStartedPulling="2026-02-18 19:48:06.862248255 +0000 UTC m=+850.444203100" lastFinishedPulling="2026-02-18 19:48:12.754572974 +0000 UTC m=+856.336527819" observedRunningTime="2026-02-18 19:48:13.157053829 +0000 UTC m=+856.739008674" watchObservedRunningTime="2026-02-18 19:48:13.159250423 +0000 UTC m=+856.741205268" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.865685 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.867410 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.882617 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.905835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.905892 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.905921 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.007327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.007386 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.007423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.008189 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.008431 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.025812 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.185761 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.769317 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:21 crc kubenswrapper[4932]: I0218 19:48:21.191035 4932 generic.go:334] "Generic (PLEG): container finished" podID="ba1a775b-f93a-44fb-8588-9088a479826f" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" exitCode=0 Feb 18 19:48:21 crc kubenswrapper[4932]: I0218 19:48:21.191080 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290"} Feb 18 19:48:21 crc kubenswrapper[4932]: I0218 19:48:21.191111 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerStarted","Data":"06075c9ddc5258f34d00a13d1ce1cf80729081c6aedfbee9dcd4bb5fc15000c0"} Feb 18 19:48:22 crc kubenswrapper[4932]: I0218 19:48:22.198058 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerStarted","Data":"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6"} Feb 18 19:48:23 crc kubenswrapper[4932]: I0218 19:48:23.205611 4932 generic.go:334] "Generic (PLEG): container finished" podID="ba1a775b-f93a-44fb-8588-9088a479826f" containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" exitCode=0 Feb 18 19:48:23 crc kubenswrapper[4932]: I0218 19:48:23.205965 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" 
event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6"} Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.219900 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerStarted","Data":"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880"} Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.243535 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9s8nd" podStartSLOduration=3.127745989 podStartE2EDuration="6.243516041s" podCreationTimestamp="2026-02-18 19:48:19 +0000 UTC" firstStartedPulling="2026-02-18 19:48:21.192226899 +0000 UTC m=+864.774181734" lastFinishedPulling="2026-02-18 19:48:24.307996931 +0000 UTC m=+867.889951786" observedRunningTime="2026-02-18 19:48:25.240724032 +0000 UTC m=+868.822678877" watchObservedRunningTime="2026-02-18 19:48:25.243516041 +0000 UTC m=+868.825470896" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.662677 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.663855 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.711460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.775720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.775843 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.775894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877352 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877951 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.893798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.977687 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:26 crc kubenswrapper[4932]: I0218 19:48:26.244281 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:26 crc kubenswrapper[4932]: I0218 19:48:26.355468 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:27 crc kubenswrapper[4932]: I0218 19:48:27.238509 4932 generic.go:334] "Generic (PLEG): container finished" podID="743a9d5a-33ac-4937-a081-195105ed16b3" containerID="3604a684daca1b21cf880f0d994d084bf3376d4f9eda7835931e9e96b1c4b9ef" exitCode=0 Feb 18 19:48:27 crc kubenswrapper[4932]: I0218 19:48:27.238631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"3604a684daca1b21cf880f0d994d084bf3376d4f9eda7835931e9e96b1c4b9ef"} Feb 18 19:48:27 crc kubenswrapper[4932]: I0218 19:48:27.238959 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerStarted","Data":"b49be4835d48c0d3c2ee538792e112030676109057e86ba80e039cfaad394592"} Feb 18 19:48:29 crc kubenswrapper[4932]: I0218 19:48:29.258656 4932 generic.go:334] "Generic (PLEG): container finished" podID="743a9d5a-33ac-4937-a081-195105ed16b3" containerID="ada54d78ad5b66f0b05c4a0f171470d8f4fe3a536e318eb231e890b0d3bb4e21" exitCode=0 Feb 18 19:48:29 crc kubenswrapper[4932]: I0218 19:48:29.258735 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"ada54d78ad5b66f0b05c4a0f171470d8f4fe3a536e318eb231e890b0d3bb4e21"} Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.186018 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.186639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.254562 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.266698 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerStarted","Data":"3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae"} Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.312844 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5rjw" podStartSLOduration=2.74194794 podStartE2EDuration="5.312828275s" podCreationTimestamp="2026-02-18 19:48:25 +0000 UTC" firstStartedPulling="2026-02-18 19:48:27.241005827 +0000 UTC m=+870.822960692" lastFinishedPulling="2026-02-18 19:48:29.811886182 +0000 UTC m=+873.393841027" observedRunningTime="2026-02-18 19:48:30.307237997 +0000 UTC m=+873.889192862" watchObservedRunningTime="2026-02-18 19:48:30.312828275 +0000 UTC m=+873.894783130" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.317059 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:31 crc kubenswrapper[4932]: I0218 19:48:31.457844 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:32 crc kubenswrapper[4932]: I0218 19:48:32.281451 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-9s8nd" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" containerID="cri-o://f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" gracePeriod=2 Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.263730 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.276357 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"ba1a775b-f93a-44fb-8588-9088a479826f\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.276406 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"ba1a775b-f93a-44fb-8588-9088a479826f\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.276486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"ba1a775b-f93a-44fb-8588-9088a479826f\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.277289 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities" (OuterVolumeSpecName: "utilities") pod "ba1a775b-f93a-44fb-8588-9088a479826f" (UID: "ba1a775b-f93a-44fb-8588-9088a479826f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.281372 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t" (OuterVolumeSpecName: "kube-api-access-cjm7t") pod "ba1a775b-f93a-44fb-8588-9088a479826f" (UID: "ba1a775b-f93a-44fb-8588-9088a479826f"). InnerVolumeSpecName "kube-api-access-cjm7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289254 4932 generic.go:334] "Generic (PLEG): container finished" podID="ba1a775b-f93a-44fb-8588-9088a479826f" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" exitCode=0 Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289311 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289304 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880"} Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"06075c9ddc5258f34d00a13d1ce1cf80729081c6aedfbee9dcd4bb5fc15000c0"} Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289529 4932 scope.go:117] "RemoveContainer" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.309054 4932 scope.go:117] "RemoveContainer" containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" Feb 18 19:48:33 crc 
kubenswrapper[4932]: I0218 19:48:33.328972 4932 scope.go:117] "RemoveContainer" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.348332 4932 scope.go:117] "RemoveContainer" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" Feb 18 19:48:33 crc kubenswrapper[4932]: E0218 19:48:33.348780 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880\": container with ID starting with f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880 not found: ID does not exist" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.348827 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880"} err="failed to get container status \"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880\": rpc error: code = NotFound desc = could not find container \"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880\": container with ID starting with f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880 not found: ID does not exist" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.348856 4932 scope.go:117] "RemoveContainer" containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" Feb 18 19:48:33 crc kubenswrapper[4932]: E0218 19:48:33.349163 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6\": container with ID starting with adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6 not found: ID does not exist" 
containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.349225 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6"} err="failed to get container status \"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6\": rpc error: code = NotFound desc = could not find container \"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6\": container with ID starting with adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6 not found: ID does not exist" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.349251 4932 scope.go:117] "RemoveContainer" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" Feb 18 19:48:33 crc kubenswrapper[4932]: E0218 19:48:33.349578 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290\": container with ID starting with 9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290 not found: ID does not exist" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.349614 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290"} err="failed to get container status \"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290\": rpc error: code = NotFound desc = could not find container \"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290\": container with ID starting with 9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290 not found: ID does not exist" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.368978 4932 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba1a775b-f93a-44fb-8588-9088a479826f" (UID: "ba1a775b-f93a-44fb-8588-9088a479826f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.377950 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.377975 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.377989 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.624335 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.629278 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:35 crc kubenswrapper[4932]: I0218 19:48:35.193074 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" path="/var/lib/kubelet/pods/ba1a775b-f93a-44fb-8588-9088a479826f/volumes" Feb 18 19:48:35 crc kubenswrapper[4932]: I0218 19:48:35.978374 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:35 crc 
kubenswrapper[4932]: I0218 19:48:35.978438 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:36 crc kubenswrapper[4932]: I0218 19:48:36.017598 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:36 crc kubenswrapper[4932]: I0218 19:48:36.350430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:38 crc kubenswrapper[4932]: I0218 19:48:38.657307 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:38 crc kubenswrapper[4932]: I0218 19:48:38.657854 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q5rjw" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="registry-server" containerID="cri-o://3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae" gracePeriod=2 Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.336193 4932 generic.go:334] "Generic (PLEG): container finished" podID="743a9d5a-33ac-4937-a081-195105ed16b3" containerID="3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae" exitCode=0 Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.336358 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae"} Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.608998 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.662724 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"743a9d5a-33ac-4937-a081-195105ed16b3\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.662846 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"743a9d5a-33ac-4937-a081-195105ed16b3\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.662968 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"743a9d5a-33ac-4937-a081-195105ed16b3\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.663955 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities" (OuterVolumeSpecName: "utilities") pod "743a9d5a-33ac-4937-a081-195105ed16b3" (UID: "743a9d5a-33ac-4937-a081-195105ed16b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.669547 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62" (OuterVolumeSpecName: "kube-api-access-pjl62") pod "743a9d5a-33ac-4937-a081-195105ed16b3" (UID: "743a9d5a-33ac-4937-a081-195105ed16b3"). InnerVolumeSpecName "kube-api-access-pjl62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.696469 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "743a9d5a-33ac-4937-a081-195105ed16b3" (UID: "743a9d5a-33ac-4937-a081-195105ed16b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.764855 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.764949 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.764974 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.345032 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"b49be4835d48c0d3c2ee538792e112030676109057e86ba80e039cfaad394592"} Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.345083 4932 scope.go:117] "RemoveContainer" containerID="3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.345109 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.365079 4932 scope.go:117] "RemoveContainer" containerID="ada54d78ad5b66f0b05c4a0f171470d8f4fe3a536e318eb231e890b0d3bb4e21" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.381415 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.388474 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.402890 4932 scope.go:117] "RemoveContainer" containerID="3604a684daca1b21cf880f0d994d084bf3376d4f9eda7835931e9e96b1c4b9ef" Feb 18 19:48:41 crc kubenswrapper[4932]: I0218 19:48:41.186773 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" path="/var/lib/kubelet/pods/743a9d5a-33ac-4937-a081-195105ed16b3/volumes" Feb 18 19:48:45 crc kubenswrapper[4932]: I0218 19:48:45.893315 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785514 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r"] Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785833 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785856 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785877 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" 
containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785887 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785900 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785908 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785924 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785931 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785943 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785951 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785962 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785970 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.786111 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" 
containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.786126 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.786679 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.788306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.788343 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zltgr" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.794344 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d4twn"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.800304 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.803474 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.803499 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.803921 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.864748 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bk4kx"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865371 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44wm\" (UniqueName: \"kubernetes.io/projected/849240df-e1e2-40a7-8406-b1033e46b15e-kube-api-access-x44wm\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865419 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-conf\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865449 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865560 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-startup\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-sockets\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865786 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqb7\" (UniqueName: \"kubernetes.io/projected/156971d4-9e01-4970-bb94-4511a2c7c94b-kube-api-access-7hqb7\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865828 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865870 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-reloader\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.866775 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.869841 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.869847 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.870078 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.871898 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mxgbg" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.886568 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-7pdzl"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.887716 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.889819 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.897822 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7pdzl"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967291 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhb6z\" (UniqueName: \"kubernetes.io/projected/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-kube-api-access-vhb6z\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967348 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-startup\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967374 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s897\" (UniqueName: \"kubernetes.io/projected/419fb9f6-a8b4-4b14-bc10-179c9964f712-kube-api-access-9s897\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967418 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-metrics-certs\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc 
kubenswrapper[4932]: I0218 19:48:46.967436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-sockets\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967473 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-cert\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hqb7\" (UniqueName: \"kubernetes.io/projected/156971d4-9e01-4970-bb94-4511a2c7c94b-kube-api-access-7hqb7\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967533 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967566 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-reloader\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967586 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/419fb9f6-a8b4-4b14-bc10-179c9964f712-metallb-excludel2\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967627 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44wm\" (UniqueName: \"kubernetes.io/projected/849240df-e1e2-40a7-8406-b1033e46b15e-kube-api-access-x44wm\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967645 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-metrics-certs\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967664 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-conf\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967696 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967713 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967862 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-sockets\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968102 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-reloader\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.968129 4932 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.968142 4932 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 
19:48:46.968192 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs podName:156971d4-9e01-4970-bb94-4511a2c7c94b nodeName:}" failed. No retries permitted until 2026-02-18 19:48:47.468163549 +0000 UTC m=+891.050118394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs") pod "frr-k8s-d4twn" (UID: "156971d4-9e01-4970-bb94-4511a2c7c94b") : secret "frr-k8s-certs-secret" not found Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.968218 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert podName:849240df-e1e2-40a7-8406-b1033e46b15e nodeName:}" failed. No retries permitted until 2026-02-18 19:48:47.46820092 +0000 UTC m=+891.050155845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert") pod "frr-k8s-webhook-server-78b44bf5bb-7k58r" (UID: "849240df-e1e2-40a7-8406-b1033e46b15e") : secret "frr-k8s-webhook-server-cert" not found Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-conf\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968274 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-startup\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.989941 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hqb7\" (UniqueName: \"kubernetes.io/projected/156971d4-9e01-4970-bb94-4511a2c7c94b-kube-api-access-7hqb7\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.998372 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44wm\" (UniqueName: \"kubernetes.io/projected/849240df-e1e2-40a7-8406-b1033e46b15e-kube-api-access-x44wm\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068343 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-metrics-certs\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhb6z\" (UniqueName: \"kubernetes.io/projected/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-kube-api-access-vhb6z\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s897\" (UniqueName: \"kubernetes.io/projected/419fb9f6-a8b4-4b14-bc10-179c9964f712-kube-api-access-9s897\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068477 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-metrics-certs\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068515 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-cert\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068554 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068589 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/419fb9f6-a8b4-4b14-bc10-179c9964f712-metallb-excludel2\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.068755 4932 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.068838 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist podName:419fb9f6-a8b4-4b14-bc10-179c9964f712 nodeName:}" failed. No retries permitted until 2026-02-18 19:48:47.568814391 +0000 UTC m=+891.150769236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist") pod "speaker-bk4kx" (UID: "419fb9f6-a8b4-4b14-bc10-179c9964f712") : secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.069380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/419fb9f6-a8b4-4b14-bc10-179c9964f712-metallb-excludel2\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.072311 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-metrics-certs\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.073089 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-metrics-certs\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.073244 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-cert\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.084898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhb6z\" (UniqueName: \"kubernetes.io/projected/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-kube-api-access-vhb6z\") pod \"controller-69bbfbf88f-7pdzl\" 
(UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.089146 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s897\" (UniqueName: \"kubernetes.io/projected/419fb9f6-a8b4-4b14-bc10-179c9964f712-kube-api-access-9s897\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.222366 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.473764 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.474128 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.479576 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.490614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.575051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.575271 4932 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.575341 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist podName:419fb9f6-a8b4-4b14-bc10-179c9964f712 nodeName:}" failed. No retries permitted until 2026-02-18 19:48:48.57532344 +0000 UTC m=+892.157278285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist") pod "speaker-bk4kx" (UID: "419fb9f6-a8b4-4b14-bc10-179c9964f712") : secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.649744 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7pdzl"] Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.703516 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.718631 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.136479 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r"] Feb 18 19:48:48 crc kubenswrapper[4932]: W0218 19:48:48.141996 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849240df_e1e2_40a7_8406_b1033e46b15e.slice/crio-f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d WatchSource:0}: Error finding container f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d: Status 404 returned error can't find the container with id f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.404951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7pdzl" event={"ID":"7f2268c2-3ba6-4726-82ae-a80c1a5efb85","Type":"ContainerStarted","Data":"57ea6c7e98fe584a818ad3ce671ce1cacbbcb0607184fe71e60f760ae1d64b56"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.405974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7pdzl" event={"ID":"7f2268c2-3ba6-4726-82ae-a80c1a5efb85","Type":"ContainerStarted","Data":"aaa1d35735d5e5f86f9ad528fc9931461f6e201f577b9722f935c79e0ed94193"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.406005 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7pdzl" event={"ID":"7f2268c2-3ba6-4726-82ae-a80c1a5efb85","Type":"ContainerStarted","Data":"53c2278a0990ff75b6f7455d0849f02a82e92c18da31bc8814de16e2aa0bc32d"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.406033 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.406623 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" event={"ID":"849240df-e1e2-40a7-8406-b1033e46b15e","Type":"ContainerStarted","Data":"f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.409266 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"37ec7d22c224fbf5ff880780fc8039b3c3b9899f57b4bd1ebcfca3783530792f"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.425754 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-7pdzl" podStartSLOduration=2.425730441 podStartE2EDuration="2.425730441s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:48:48.421567998 +0000 UTC m=+892.003522853" watchObservedRunningTime="2026-02-18 19:48:48.425730441 +0000 UTC m=+892.007685286" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.588633 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.596273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.680970 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-bk4kx" Feb 18 19:48:48 crc kubenswrapper[4932]: W0218 19:48:48.711034 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419fb9f6_a8b4_4b14_bc10_179c9964f712.slice/crio-3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199 WatchSource:0}: Error finding container 3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199: Status 404 returned error can't find the container with id 3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199 Feb 18 19:48:49 crc kubenswrapper[4932]: I0218 19:48:49.418096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bk4kx" event={"ID":"419fb9f6-a8b4-4b14-bc10-179c9964f712","Type":"ContainerStarted","Data":"f69b70bd25a1f7d1f9c092614156b9da44e2b0a333bfed842b18ff7616fccb6e"} Feb 18 19:48:49 crc kubenswrapper[4932]: I0218 19:48:49.418745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bk4kx" event={"ID":"419fb9f6-a8b4-4b14-bc10-179c9964f712","Type":"ContainerStarted","Data":"3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199"} Feb 18 19:48:50 crc kubenswrapper[4932]: I0218 19:48:50.445534 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bk4kx" event={"ID":"419fb9f6-a8b4-4b14-bc10-179c9964f712","Type":"ContainerStarted","Data":"e404da7e36125783917b375254222b6abe3e9db0c594ff801af908a0776ef417"} Feb 18 19:48:50 crc kubenswrapper[4932]: I0218 19:48:50.465897 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-bk4kx" podStartSLOduration=4.46587694 podStartE2EDuration="4.46587694s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:48:50.463100121 +0000 UTC m=+894.045054986" 
watchObservedRunningTime="2026-02-18 19:48:50.46587694 +0000 UTC m=+894.047831785" Feb 18 19:48:51 crc kubenswrapper[4932]: I0218 19:48:51.457424 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bk4kx" Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.484811 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" event={"ID":"849240df-e1e2-40a7-8406-b1033e46b15e","Type":"ContainerStarted","Data":"1b2fd6f421466f154317a355a0792cea51973554d96aeb7f9c6648a7cfd53fa2"} Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.484970 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.487831 4932 generic.go:334] "Generic (PLEG): container finished" podID="156971d4-9e01-4970-bb94-4511a2c7c94b" containerID="2eaf09ca7b5c78fc0695c6ba59e607c3563bde4ae2e505cea21f3c8dea6c5c04" exitCode=0 Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.487878 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerDied","Data":"2eaf09ca7b5c78fc0695c6ba59e607c3563bde4ae2e505cea21f3c8dea6c5c04"} Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.557069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" podStartSLOduration=2.753395041 podStartE2EDuration="9.557045593s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="2026-02-18 19:48:48.14548754 +0000 UTC m=+891.727442385" lastFinishedPulling="2026-02-18 19:48:54.949138092 +0000 UTC m=+898.531092937" observedRunningTime="2026-02-18 19:48:55.520643675 +0000 UTC m=+899.102598530" watchObservedRunningTime="2026-02-18 19:48:55.557045593 +0000 UTC m=+899.139000448" Feb 18 19:48:56 crc 
kubenswrapper[4932]: I0218 19:48:56.495018 4932 generic.go:334] "Generic (PLEG): container finished" podID="156971d4-9e01-4970-bb94-4511a2c7c94b" containerID="f02e212b15f1ced0b4f05a61ccedd52bf25ff21df803d0b005c333e7f46d8a1f" exitCode=0 Feb 18 19:48:56 crc kubenswrapper[4932]: I0218 19:48:56.495095 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerDied","Data":"f02e212b15f1ced0b4f05a61ccedd52bf25ff21df803d0b005c333e7f46d8a1f"} Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.227643 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.506140 4932 generic.go:334] "Generic (PLEG): container finished" podID="156971d4-9e01-4970-bb94-4511a2c7c94b" containerID="155f4849c322b8e7601cfb7428eb47b197ac31fcb99541433514e991b1677969" exitCode=0 Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.506230 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerDied","Data":"155f4849c322b8e7601cfb7428eb47b197ac31fcb99541433514e991b1677969"} Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.605979 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.606022 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522497 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"93ea22518bb669e6f6e7e727f1a9229f5d14fa5f801b0b2352dfda1738534f32"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522838 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522853 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"f487e31b8b6a614802ac81a4c4ef81c0a85ca454bc8d445b3b741833734d677b"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"a852ed33248d7328f7ff0c4c1261a7ab6f80cb93e6f32ec290c33ccf3301013c"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522877 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"f89858a65fa93214ccf31d2f903117292f248c60ba453c04b95e122e1a789aa3"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"bba6b26b101a141c6678d8efe6846c0148f4af1f625434728d922e6b803de8ec"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522898 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" 
event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"03778760b4671427fe0bbde1aadd57f1036ae1fc48897092d97d040f2e9db7c0"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.553681 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d4twn" podStartSLOduration=5.47160886 podStartE2EDuration="12.553657617s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="2026-02-18 19:48:47.851007969 +0000 UTC m=+891.432962814" lastFinishedPulling="2026-02-18 19:48:54.933056736 +0000 UTC m=+898.515011571" observedRunningTime="2026-02-18 19:48:58.549148865 +0000 UTC m=+902.131103710" watchObservedRunningTime="2026-02-18 19:48:58.553657617 +0000 UTC m=+902.135612472" Feb 18 19:49:02 crc kubenswrapper[4932]: I0218 19:49:02.719655 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:49:02 crc kubenswrapper[4932]: I0218 19:49:02.758401 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:49:07 crc kubenswrapper[4932]: I0218 19:49:07.709375 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:49:07 crc kubenswrapper[4932]: I0218 19:49:07.722368 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:49:09 crc kubenswrapper[4932]: I0218 19:49:09.082889 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bk4kx" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.228581 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.231237 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.238461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"openstack-operator-index-lglkz\" (UID: \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.242727 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.243501 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7jzpj" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.244628 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.260859 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.340297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"openstack-operator-index-lglkz\" (UID: \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.360067 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"openstack-operator-index-lglkz\" (UID: 
\"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.569468 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.791159 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:13 crc kubenswrapper[4932]: I0218 19:49:13.634867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerStarted","Data":"db23829867202d64d82ccd46bc9f4bacf0c6144e4bdd52f1eadd9a9071acff25"} Feb 18 19:49:15 crc kubenswrapper[4932]: I0218 19:49:15.598312 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:15 crc kubenswrapper[4932]: I0218 19:49:15.653649 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerStarted","Data":"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc"} Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.207027 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lglkz" podStartSLOduration=2.18618398 podStartE2EDuration="4.206995722s" podCreationTimestamp="2026-02-18 19:49:12 +0000 UTC" firstStartedPulling="2026-02-18 19:49:12.800409899 +0000 UTC m=+916.382364734" lastFinishedPulling="2026-02-18 19:49:14.821221631 +0000 UTC m=+918.403176476" observedRunningTime="2026-02-18 19:49:15.683057284 +0000 UTC m=+919.265012169" watchObservedRunningTime="2026-02-18 19:49:16.206995722 +0000 UTC m=+919.788950667" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.213261 
4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hbl5z"] Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.214794 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.229347 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbl5z"] Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.399881 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzzd\" (UniqueName: \"kubernetes.io/projected/80acb08c-9e7c-49a6-908f-83d3b958e7b2-kube-api-access-brzzd\") pod \"openstack-operator-index-hbl5z\" (UID: \"80acb08c-9e7c-49a6-908f-83d3b958e7b2\") " pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.501258 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzzd\" (UniqueName: \"kubernetes.io/projected/80acb08c-9e7c-49a6-908f-83d3b958e7b2-kube-api-access-brzzd\") pod \"openstack-operator-index-hbl5z\" (UID: \"80acb08c-9e7c-49a6-908f-83d3b958e7b2\") " pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.530581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzzd\" (UniqueName: \"kubernetes.io/projected/80acb08c-9e7c-49a6-908f-83d3b958e7b2-kube-api-access-brzzd\") pod \"openstack-operator-index-hbl5z\" (UID: \"80acb08c-9e7c-49a6-908f-83d3b958e7b2\") " pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.539839 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.661514 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-lglkz" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" containerID="cri-o://ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" gracePeriod=2 Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.024207 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.031545 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbl5z"] Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.212216 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\" (UID: \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.217709 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r" (OuterVolumeSpecName: "kube-api-access-4tz5r") pod "ad790bf4-8b1b-43a0-b027-64ef1f97688b" (UID: "ad790bf4-8b1b-43a0-b027-64ef1f97688b"). InnerVolumeSpecName "kube-api-access-4tz5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.313868 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.672084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbl5z" event={"ID":"80acb08c-9e7c-49a6-908f-83d3b958e7b2","Type":"ContainerStarted","Data":"922654891b19f144830fd6ae2250c6ada163262b8811bdad5d0544137968f511"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.672146 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbl5z" event={"ID":"80acb08c-9e7c-49a6-908f-83d3b958e7b2","Type":"ContainerStarted","Data":"13c4361fa8e7450a59cec819f105c251cfa919ada17f7c5c3fbadaf179e41f71"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674630 4932 generic.go:334] "Generic (PLEG): container finished" podID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" exitCode=0 Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674721 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674757 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerDied","Data":"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerDied","Data":"db23829867202d64d82ccd46bc9f4bacf0c6144e4bdd52f1eadd9a9071acff25"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674860 4932 scope.go:117] "RemoveContainer" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.697335 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hbl5z" podStartSLOduration=1.6432701079999998 podStartE2EDuration="1.697319861s" podCreationTimestamp="2026-02-18 19:49:16 +0000 UTC" firstStartedPulling="2026-02-18 19:49:17.02531358 +0000 UTC m=+920.607268425" lastFinishedPulling="2026-02-18 19:49:17.079363323 +0000 UTC m=+920.661318178" observedRunningTime="2026-02-18 19:49:17.695488096 +0000 UTC m=+921.277442971" watchObservedRunningTime="2026-02-18 19:49:17.697319861 +0000 UTC m=+921.279274716" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.701783 4932 scope.go:117] "RemoveContainer" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" Feb 18 19:49:17 crc kubenswrapper[4932]: E0218 19:49:17.702431 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc\": container 
with ID starting with ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc not found: ID does not exist" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.702473 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc"} err="failed to get container status \"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc\": rpc error: code = NotFound desc = could not find container \"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc\": container with ID starting with ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc not found: ID does not exist" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.721442 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.729400 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:19 crc kubenswrapper[4932]: I0218 19:49:19.193558 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" path="/var/lib/kubelet/pods/ad790bf4-8b1b-43a0-b027-64ef1f97688b/volumes" Feb 18 19:49:26 crc kubenswrapper[4932]: I0218 19:49:26.540880 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:26 crc kubenswrapper[4932]: I0218 19:49:26.541545 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:26 crc kubenswrapper[4932]: I0218 19:49:26.586079 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:26 crc 
kubenswrapper[4932]: I0218 19:49:26.774834 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:27 crc kubenswrapper[4932]: I0218 19:49:27.605813 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:49:27 crc kubenswrapper[4932]: I0218 19:49:27.606876 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.315264 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2"] Feb 18 19:49:29 crc kubenswrapper[4932]: E0218 19:49:29.315694 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.315713 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.315982 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.317826 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.324106 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qsshk" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.332727 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2"] Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.483491 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.483711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.483836 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 
19:49:29.586477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.587974 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.587986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.588274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.588930 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.625602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.662950 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.120948 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2"] Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.785052 4932 generic.go:334] "Generic (PLEG): container finished" podID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerID="02fea0e719048f7f6b29ca81cc6bf4132bc9ef6b1f16295e859f28e8b18e1563" exitCode=0 Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.785137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"02fea0e719048f7f6b29ca81cc6bf4132bc9ef6b1f16295e859f28e8b18e1563"} Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.785215 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerStarted","Data":"16d0afd0735b29154ca31e96cfd81cb1ab5e28772ab683f4f174d2e10050e07b"} Feb 18 19:49:31 crc kubenswrapper[4932]: I0218 19:49:31.797483 4932 generic.go:334] "Generic (PLEG): container finished" podID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerID="2da580f34eeec2c50f4f1bb64ba727ae9b68d89234d3f2d58a2e49e7b095b8e6" exitCode=0 Feb 18 19:49:31 crc kubenswrapper[4932]: I0218 19:49:31.797968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"2da580f34eeec2c50f4f1bb64ba727ae9b68d89234d3f2d58a2e49e7b095b8e6"} Feb 18 19:49:32 crc kubenswrapper[4932]: I0218 19:49:32.808630 4932 generic.go:334] "Generic (PLEG): container finished" podID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerID="cf995c532a3a5612c7ab11dad314f25deca245da27ce35c1fd72ae7f294b024f" exitCode=0 Feb 18 19:49:32 crc kubenswrapper[4932]: I0218 19:49:32.808750 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"cf995c532a3a5612c7ab11dad314f25deca245da27ce35c1fd72ae7f294b024f"} Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.166314 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.356027 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.356552 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.356693 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.357359 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle" (OuterVolumeSpecName: "bundle") pod "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" (UID: "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.363147 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz" (OuterVolumeSpecName: "kube-api-access-rtlsz") pod "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" (UID: "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762"). InnerVolumeSpecName "kube-api-access-rtlsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.381525 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util" (OuterVolumeSpecName: "util") pod "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" (UID: "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.458794 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.458838 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.458855 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.829821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"16d0afd0735b29154ca31e96cfd81cb1ab5e28772ab683f4f174d2e10050e07b"} Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.829886 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d0afd0735b29154ca31e96cfd81cb1ab5e28772ab683f4f174d2e10050e07b" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.829971 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505167 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr"] Feb 18 19:49:36 crc kubenswrapper[4932]: E0218 19:49:36.505545 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="util" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505565 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="util" Feb 18 19:49:36 crc kubenswrapper[4932]: E0218 19:49:36.505598 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="extract" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505609 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="extract" Feb 18 19:49:36 crc kubenswrapper[4932]: E0218 19:49:36.505626 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="pull" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505638 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="pull" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505837 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="extract" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.506526 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.519019 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zd4xf" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.529153 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr"] Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.686733 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmft\" (UniqueName: \"kubernetes.io/projected/ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce-kube-api-access-2pmft\") pod \"openstack-operator-controller-init-54f996c4d6-kzsqr\" (UID: \"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce\") " pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.789041 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmft\" (UniqueName: \"kubernetes.io/projected/ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce-kube-api-access-2pmft\") pod \"openstack-operator-controller-init-54f996c4d6-kzsqr\" (UID: \"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce\") " pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.817485 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmft\" (UniqueName: \"kubernetes.io/projected/ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce-kube-api-access-2pmft\") pod \"openstack-operator-controller-init-54f996c4d6-kzsqr\" (UID: \"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce\") " pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.822058 4932 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:37 crc kubenswrapper[4932]: I0218 19:49:37.330565 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr"] Feb 18 19:49:37 crc kubenswrapper[4932]: I0218 19:49:37.851924 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" event={"ID":"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce","Type":"ContainerStarted","Data":"f4287e62498162809da5a27b02dc8f14aab62b8c45c4680dffc3d74a395b7405"} Feb 18 19:49:41 crc kubenswrapper[4932]: I0218 19:49:41.876528 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" event={"ID":"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce","Type":"ContainerStarted","Data":"5ade3468ee541d4f66060329e19913f9fb5c73b097fadc0a004c5f9581e7b18f"} Feb 18 19:49:41 crc kubenswrapper[4932]: I0218 19:49:41.877037 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:41 crc kubenswrapper[4932]: I0218 19:49:41.907382 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" podStartSLOduration=2.253425285 podStartE2EDuration="5.907349077s" podCreationTimestamp="2026-02-18 19:49:36 +0000 UTC" firstStartedPulling="2026-02-18 19:49:37.337853398 +0000 UTC m=+940.919808253" lastFinishedPulling="2026-02-18 19:49:40.9917772 +0000 UTC m=+944.573732045" observedRunningTime="2026-02-18 19:49:41.9034371 +0000 UTC m=+945.485391955" watchObservedRunningTime="2026-02-18 19:49:41.907349077 +0000 UTC m=+945.489303932" Feb 18 19:49:46 crc kubenswrapper[4932]: I0218 19:49:46.824869 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.606426 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.606861 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.606908 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.607485 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.607550 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992" gracePeriod=600 Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009487 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992" exitCode=0 Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992"} Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349"} Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009642 4932 scope.go:117] "RemoveContainer" containerID="f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.772706 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.778545 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.782149 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-bz8sq" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.788633 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.789543 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.794835 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-psps4" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.794902 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.806913 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.828045 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.828761 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.836586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-s8t4r" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.843101 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.861231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.862284 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.877257 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-r9dcb" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.916860 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.917768 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.925765 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-68sgs" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.926567 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqlp\" (UniqueName: \"kubernetes.io/projected/33f4dcd6-0eea-40f3-9968-458594d82013-kube-api-access-dxqlp\") pod \"designate-operator-controller-manager-55cc45767f-mp2bb\" (UID: \"33f4dcd6-0eea-40f3-9968-458594d82013\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927790 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdsl\" (UniqueName: \"kubernetes.io/projected/59af7bc1-7774-4102-ae6c-2d7f820d3b93-kube-api-access-ktdsl\") pod \"cinder-operator-controller-manager-57746b5ff9-56fbf\" (UID: \"59af7bc1-7774-4102-ae6c-2d7f820d3b93\") " 
pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927822 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwztk\" (UniqueName: \"kubernetes.io/projected/4f286c1e-d207-47a0-86be-6711856071a7-kube-api-access-lwztk\") pod \"glance-operator-controller-manager-68c6d499cb-b46xh\" (UID: \"4f286c1e-d207-47a0-86be-6711856071a7\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927844 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhxn\" (UniqueName: \"kubernetes.io/projected/d03b5e78-a45c-49aa-8915-336be03c8c94-kube-api-access-blhxn\") pod \"barbican-operator-controller-manager-c4b7d6946-clwts\" (UID: \"d03b5e78-a45c-49aa-8915-336be03c8c94\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.950845 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.955227 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.956235 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.959026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zspnl" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.968342 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.979219 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.980067 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.985581 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.985634 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f6sbm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.000314 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.023463 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.024234 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.028822 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.028884 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ch744" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.029969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxqlp\" (UniqueName: \"kubernetes.io/projected/33f4dcd6-0eea-40f3-9968-458594d82013-kube-api-access-dxqlp\") pod \"designate-operator-controller-manager-55cc45767f-mp2bb\" (UID: \"33f4dcd6-0eea-40f3-9968-458594d82013\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030096 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktdsl\" (UniqueName: \"kubernetes.io/projected/59af7bc1-7774-4102-ae6c-2d7f820d3b93-kube-api-access-ktdsl\") pod \"cinder-operator-controller-manager-57746b5ff9-56fbf\" (UID: \"59af7bc1-7774-4102-ae6c-2d7f820d3b93\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5469v\" (UniqueName: \"kubernetes.io/projected/a0fe77a1-c4a7-422f-b7c2-3062c2af1393-kube-api-access-5469v\") pod \"heat-operator-controller-manager-9595d6797-7ssxs\" (UID: \"a0fe77a1-c4a7-422f-b7c2-3062c2af1393\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030295 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lwztk\" (UniqueName: \"kubernetes.io/projected/4f286c1e-d207-47a0-86be-6711856071a7-kube-api-access-lwztk\") pod \"glance-operator-controller-manager-68c6d499cb-b46xh\" (UID: \"4f286c1e-d207-47a0-86be-6711856071a7\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030392 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blhxn\" (UniqueName: \"kubernetes.io/projected/d03b5e78-a45c-49aa-8915-336be03c8c94-kube-api-access-blhxn\") pod \"barbican-operator-controller-manager-c4b7d6946-clwts\" (UID: \"d03b5e78-a45c-49aa-8915-336be03c8c94\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.061952 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwztk\" (UniqueName: \"kubernetes.io/projected/4f286c1e-d207-47a0-86be-6711856071a7-kube-api-access-lwztk\") pod \"glance-operator-controller-manager-68c6d499cb-b46xh\" (UID: \"4f286c1e-d207-47a0-86be-6711856071a7\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.063095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blhxn\" (UniqueName: \"kubernetes.io/projected/d03b5e78-a45c-49aa-8915-336be03c8c94-kube-api-access-blhxn\") pod \"barbican-operator-controller-manager-c4b7d6946-clwts\" (UID: \"d03b5e78-a45c-49aa-8915-336be03c8c94\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.068585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdsl\" (UniqueName: \"kubernetes.io/projected/59af7bc1-7774-4102-ae6c-2d7f820d3b93-kube-api-access-ktdsl\") pod 
\"cinder-operator-controller-manager-57746b5ff9-56fbf\" (UID: \"59af7bc1-7774-4102-ae6c-2d7f820d3b93\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.089627 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxqlp\" (UniqueName: \"kubernetes.io/projected/33f4dcd6-0eea-40f3-9968-458594d82013-kube-api-access-dxqlp\") pod \"designate-operator-controller-manager-55cc45767f-mp2bb\" (UID: \"33f4dcd6-0eea-40f3-9968-458594d82013\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.108423 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.119696 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.138833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgvpc\" (UniqueName: \"kubernetes.io/projected/376e77a5-0e6f-4999-a037-96154984442f-kube-api-access-sgvpc\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.139919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxs72\" (UniqueName: \"kubernetes.io/projected/3967efad-3234-435e-b755-f684ffd74918-kube-api-access-vxs72\") pod \"horizon-operator-controller-manager-54fb488b88-4m7xr\" (UID: \"3967efad-3234-435e-b755-f684ffd74918\") " 
pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.140810 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24slc\" (UniqueName: \"kubernetes.io/projected/6b690eeb-2e37-49d8-9f44-9ca086aa2f00-kube-api-access-24slc\") pod \"ironic-operator-controller-manager-6494cdbf8f-qqxpn\" (UID: \"6b690eeb-2e37-49d8-9f44-9ca086aa2f00\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.140931 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5469v\" (UniqueName: \"kubernetes.io/projected/a0fe77a1-c4a7-422f-b7c2-3062c2af1393-kube-api-access-5469v\") pod \"heat-operator-controller-manager-9595d6797-7ssxs\" (UID: \"a0fe77a1-c4a7-422f-b7c2-3062c2af1393\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.141079 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.141877 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.181878 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.182242 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.188332 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.201035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5469v\" (UniqueName: \"kubernetes.io/projected/a0fe77a1-c4a7-422f-b7c2-3062c2af1393-kube-api-access-5469v\") pod \"heat-operator-controller-manager-9595d6797-7ssxs\" (UID: \"a0fe77a1-c4a7-422f-b7c2-3062c2af1393\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.232118 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-x948b" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.233434 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgvpc\" (UniqueName: \"kubernetes.io/projected/376e77a5-0e6f-4999-a037-96154984442f-kube-api-access-sgvpc\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxs72\" (UniqueName: \"kubernetes.io/projected/3967efad-3234-435e-b755-f684ffd74918-kube-api-access-vxs72\") pod \"horizon-operator-controller-manager-54fb488b88-4m7xr\" (UID: \"3967efad-3234-435e-b755-f684ffd74918\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242758 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8bm\" (UniqueName: \"kubernetes.io/projected/ffff0e6b-64e2-499f-8296-f374c5d62450-kube-api-access-wv8bm\") pod \"manila-operator-controller-manager-96fff9cb8-brmw7\" (UID: \"ffff0e6b-64e2-499f-8296-f374c5d62450\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242787 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24slc\" (UniqueName: \"kubernetes.io/projected/6b690eeb-2e37-49d8-9f44-9ca086aa2f00-kube-api-access-24slc\") pod \"ironic-operator-controller-manager-6494cdbf8f-qqxpn\" (UID: \"6b690eeb-2e37-49d8-9f44-9ca086aa2f00\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.243411 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.243461 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:07.743445303 +0000 UTC m=+971.325400148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.278847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxs72\" (UniqueName: \"kubernetes.io/projected/3967efad-3234-435e-b755-f684ffd74918-kube-api-access-vxs72\") pod \"horizon-operator-controller-manager-54fb488b88-4m7xr\" (UID: \"3967efad-3234-435e-b755-f684ffd74918\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.279685 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24slc\" (UniqueName: \"kubernetes.io/projected/6b690eeb-2e37-49d8-9f44-9ca086aa2f00-kube-api-access-24slc\") pod 
\"ironic-operator-controller-manager-6494cdbf8f-qqxpn\" (UID: \"6b690eeb-2e37-49d8-9f44-9ca086aa2f00\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.284048 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgvpc\" (UniqueName: \"kubernetes.io/projected/376e77a5-0e6f-4999-a037-96154984442f-kube-api-access-sgvpc\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.284433 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.285186 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.285203 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.285338 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.306403 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-v6p68" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.307667 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.308440 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.318316 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9wg25" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.318951 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.319714 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.325009 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2kflp" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.336641 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.337548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.342229 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.343835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8bm\" (UniqueName: \"kubernetes.io/projected/ffff0e6b-64e2-499f-8296-f374c5d62450-kube-api-access-wv8bm\") pod \"manila-operator-controller-manager-96fff9cb8-brmw7\" (UID: \"ffff0e6b-64e2-499f-8296-f374c5d62450\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.349816 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.353841 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fb6fl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.369733 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.377801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8bm\" (UniqueName: \"kubernetes.io/projected/ffff0e6b-64e2-499f-8296-f374c5d62450-kube-api-access-wv8bm\") pod \"manila-operator-controller-manager-96fff9cb8-brmw7\" (UID: \"ffff0e6b-64e2-499f-8296-f374c5d62450\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.385814 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.387015 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.389066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2m5xs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.419130 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.419892 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.442903 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.443891 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445678 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvb9\" (UniqueName: \"kubernetes.io/projected/9647c082-6b36-4f38-b1fb-663f095997e9-kube-api-access-lkvb9\") pod \"keystone-operator-controller-manager-6c78d668d5-m86tn\" (UID: \"9647c082-6b36-4f38-b1fb-663f095997e9\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp48\" (UniqueName: \"kubernetes.io/projected/4eb6df58-4273-41ac-8d6d-34d04a30adef-kube-api-access-qvp48\") pod \"neutron-operator-controller-manager-54967dbbdf-4hrhw\" (UID: \"4eb6df58-4273-41ac-8d6d-34d04a30adef\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445760 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94d6r\" (UniqueName: \"kubernetes.io/projected/4f1057b4-de48-4123-986c-795f9957899a-kube-api-access-94d6r\") pod \"nova-operator-controller-manager-5ddd85db87-wt2rd\" (UID: \"4f1057b4-de48-4123-986c-795f9957899a\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvz89\" (UniqueName: \"kubernetes.io/projected/27b507e8-a4b3-49cb-bef2-85a319a10257-kube-api-access-nvz89\") pod \"mariadb-operator-controller-manager-66997756f6-s8b9p\" (UID: \"27b507e8-a4b3-49cb-bef2-85a319a10257\") " 
pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.446380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.447433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bj7gq" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.460004 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.460949 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.462834 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.463627 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.466501 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7nmpc" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.466802 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9l9lk" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.488014 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.488946 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.491659 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wjht2" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.512596 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.549420 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" 
Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552364 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9mv\" (UniqueName: \"kubernetes.io/projected/d09a2660-c1e2-4305-b601-f9fb39b12ed9-kube-api-access-gq9mv\") pod \"placement-operator-controller-manager-57bd55f9b7-t8b9r\" (UID: \"d09a2660-c1e2-4305-b601-f9fb39b12ed9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnsq\" (UniqueName: \"kubernetes.io/projected/191cd867-8aef-41cd-ae38-18b08d073f5d-kube-api-access-5wnsq\") pod \"ovn-operator-controller-manager-85c99d655-6k58x\" (UID: \"191cd867-8aef-41cd-ae38-18b08d073f5d\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/73445d4e-349f-4e37-a75d-44949a14db73-kube-api-access-htbrr\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552443 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9gd\" (UniqueName: \"kubernetes.io/projected/9340dde2-09ac-43c0-ab0e-b2ce8ed53de0-kube-api-access-4x9gd\") pod \"swift-operator-controller-manager-79558bbfbf-h2gnn\" (UID: \"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552473 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s864\" (UniqueName: \"kubernetes.io/projected/d0fc20a0-4c08-4552-be44-459c503d50c3-kube-api-access-8s864\") pod \"octavia-operator-controller-manager-745bbbd77b-jpncb\" (UID: \"d0fc20a0-4c08-4552-be44-459c503d50c3\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552502 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvb9\" (UniqueName: \"kubernetes.io/projected/9647c082-6b36-4f38-b1fb-663f095997e9-kube-api-access-lkvb9\") pod \"keystone-operator-controller-manager-6c78d668d5-m86tn\" (UID: \"9647c082-6b36-4f38-b1fb-663f095997e9\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp48\" (UniqueName: \"kubernetes.io/projected/4eb6df58-4273-41ac-8d6d-34d04a30adef-kube-api-access-qvp48\") pod \"neutron-operator-controller-manager-54967dbbdf-4hrhw\" (UID: \"4eb6df58-4273-41ac-8d6d-34d04a30adef\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94d6r\" (UniqueName: \"kubernetes.io/projected/4f1057b4-de48-4123-986c-795f9957899a-kube-api-access-94d6r\") pod \"nova-operator-controller-manager-5ddd85db87-wt2rd\" (UID: \"4f1057b4-de48-4123-986c-795f9957899a\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552612 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvz89\" (UniqueName: 
\"kubernetes.io/projected/27b507e8-a4b3-49cb-bef2-85a319a10257-kube-api-access-nvz89\") pod \"mariadb-operator-controller-manager-66997756f6-s8b9p\" (UID: \"27b507e8-a4b3-49cb-bef2-85a319a10257\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.562133 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.570822 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.572001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp48\" (UniqueName: \"kubernetes.io/projected/4eb6df58-4273-41ac-8d6d-34d04a30adef-kube-api-access-qvp48\") pod \"neutron-operator-controller-manager-54967dbbdf-4hrhw\" (UID: \"4eb6df58-4273-41ac-8d6d-34d04a30adef\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.577460 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.578646 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94d6r\" (UniqueName: \"kubernetes.io/projected/4f1057b4-de48-4123-986c-795f9957899a-kube-api-access-94d6r\") pod \"nova-operator-controller-manager-5ddd85db87-wt2rd\" (UID: \"4f1057b4-de48-4123-986c-795f9957899a\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.582247 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.586537 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvb9\" (UniqueName: \"kubernetes.io/projected/9647c082-6b36-4f38-b1fb-663f095997e9-kube-api-access-lkvb9\") pod \"keystone-operator-controller-manager-6c78d668d5-m86tn\" (UID: \"9647c082-6b36-4f38-b1fb-663f095997e9\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.587772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvz89\" (UniqueName: \"kubernetes.io/projected/27b507e8-a4b3-49cb-bef2-85a319a10257-kube-api-access-nvz89\") pod \"mariadb-operator-controller-manager-66997756f6-s8b9p\" (UID: \"27b507e8-a4b3-49cb-bef2-85a319a10257\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.589209 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.599707 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.601345 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.603461 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8lm5t" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.610304 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.611228 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.623792 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-449jw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.632101 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.640778 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.641649 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.646959 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ghj4x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653761 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653805 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ftt\" (UniqueName: \"kubernetes.io/projected/6565f17b-d11e-4f28-bc32-f6e43062f81b-kube-api-access-49ftt\") pod \"test-operator-controller-manager-8467ccb4c8-ts8dz\" (UID: \"6565f17b-d11e-4f28-bc32-f6e43062f81b\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9mv\" (UniqueName: \"kubernetes.io/projected/d09a2660-c1e2-4305-b601-f9fb39b12ed9-kube-api-access-gq9mv\") pod \"placement-operator-controller-manager-57bd55f9b7-t8b9r\" (UID: \"d09a2660-c1e2-4305-b601-f9fb39b12ed9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnsq\" (UniqueName: \"kubernetes.io/projected/191cd867-8aef-41cd-ae38-18b08d073f5d-kube-api-access-5wnsq\") pod 
\"ovn-operator-controller-manager-85c99d655-6k58x\" (UID: \"191cd867-8aef-41cd-ae38-18b08d073f5d\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653883 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/73445d4e-349f-4e37-a75d-44949a14db73-kube-api-access-htbrr\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x9gd\" (UniqueName: \"kubernetes.io/projected/9340dde2-09ac-43c0-ab0e-b2ce8ed53de0-kube-api-access-4x9gd\") pod \"swift-operator-controller-manager-79558bbfbf-h2gnn\" (UID: \"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653936 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s864\" (UniqueName: \"kubernetes.io/projected/d0fc20a0-4c08-4552-be44-459c503d50c3-kube-api-access-8s864\") pod \"octavia-operator-controller-manager-745bbbd77b-jpncb\" (UID: \"d0fc20a0-4c08-4552-be44-459c503d50c3\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653988 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kcj\" (UniqueName: \"kubernetes.io/projected/9f1309cd-f84d-48a6-a8bc-fd4f70307c12-kube-api-access-75kcj\") pod \"watcher-operator-controller-manager-7fcbb7ddf5-xlhwm\" (UID: \"9f1309cd-f84d-48a6-a8bc-fd4f70307c12\") " 
pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.654017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dcb\" (UniqueName: \"kubernetes.io/projected/52b91f42-32e6-4e15-887f-56098da3900b-kube-api-access-z8dcb\") pod \"telemetry-operator-controller-manager-56dc67d744-rw4dl\" (UID: \"52b91f42-32e6-4e15-887f-56098da3900b\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.654123 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.654157 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.154144811 +0000 UTC m=+971.736099656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.674410 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.683750 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/73445d4e-349f-4e37-a75d-44949a14db73-kube-api-access-htbrr\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.687782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnsq\" (UniqueName: \"kubernetes.io/projected/191cd867-8aef-41cd-ae38-18b08d073f5d-kube-api-access-5wnsq\") pod \"ovn-operator-controller-manager-85c99d655-6k58x\" (UID: \"191cd867-8aef-41cd-ae38-18b08d073f5d\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.691484 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.698658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.699219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9mv\" (UniqueName: \"kubernetes.io/projected/d09a2660-c1e2-4305-b601-f9fb39b12ed9-kube-api-access-gq9mv\") pod \"placement-operator-controller-manager-57bd55f9b7-t8b9r\" (UID: \"d09a2660-c1e2-4305-b601-f9fb39b12ed9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.699877 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-4x9gd\" (UniqueName: \"kubernetes.io/projected/9340dde2-09ac-43c0-ab0e-b2ce8ed53de0-kube-api-access-4x9gd\") pod \"swift-operator-controller-manager-79558bbfbf-h2gnn\" (UID: \"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.701649 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s864\" (UniqueName: \"kubernetes.io/projected/d0fc20a0-4c08-4552-be44-459c503d50c3-kube-api-access-8s864\") pod \"octavia-operator-controller-manager-745bbbd77b-jpncb\" (UID: \"d0fc20a0-4c08-4552-be44-459c503d50c3\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.731620 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.732355 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.733060 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.756716 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.758181 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kcj\" (UniqueName: \"kubernetes.io/projected/9f1309cd-f84d-48a6-a8bc-fd4f70307c12-kube-api-access-75kcj\") pod \"watcher-operator-controller-manager-7fcbb7ddf5-xlhwm\" (UID: \"9f1309cd-f84d-48a6-a8bc-fd4f70307c12\") " pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.758226 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dcb\" (UniqueName: \"kubernetes.io/projected/52b91f42-32e6-4e15-887f-56098da3900b-kube-api-access-z8dcb\") pod \"telemetry-operator-controller-manager-56dc67d744-rw4dl\" (UID: \"52b91f42-32e6-4e15-887f-56098da3900b\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.758271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ftt\" (UniqueName: \"kubernetes.io/projected/6565f17b-d11e-4f28-bc32-f6e43062f81b-kube-api-access-49ftt\") pod \"test-operator-controller-manager-8467ccb4c8-ts8dz\" (UID: \"6565f17b-d11e-4f28-bc32-f6e43062f81b\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.758708 4932 secret.go:188] Couldn't 
get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.759501 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.759272132 +0000 UTC m=+972.341226987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.781596 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dcb\" (UniqueName: \"kubernetes.io/projected/52b91f42-32e6-4e15-887f-56098da3900b-kube-api-access-z8dcb\") pod \"telemetry-operator-controller-manager-56dc67d744-rw4dl\" (UID: \"52b91f42-32e6-4e15-887f-56098da3900b\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.782019 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kcj\" (UniqueName: \"kubernetes.io/projected/9f1309cd-f84d-48a6-a8bc-fd4f70307c12-kube-api-access-75kcj\") pod \"watcher-operator-controller-manager-7fcbb7ddf5-xlhwm\" (UID: \"9f1309cd-f84d-48a6-a8bc-fd4f70307c12\") " pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.788554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ftt\" (UniqueName: \"kubernetes.io/projected/6565f17b-d11e-4f28-bc32-f6e43062f81b-kube-api-access-49ftt\") pod 
\"test-operator-controller-manager-8467ccb4c8-ts8dz\" (UID: \"6565f17b-d11e-4f28-bc32-f6e43062f81b\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.799791 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.806838 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.808147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.810266 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.812187 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.813167 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kr2tb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.813342 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.825115 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.836144 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.840871 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.842501 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.855680 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zslpg" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.872096 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.903026 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.917555 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.941252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.947935 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.956038 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.961838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.962031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.962070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmpb\" (UniqueName: \"kubernetes.io/projected/7d117b07-cdb8-4d98-bd18-87d6511259af-kube-api-access-zpmpb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d5brc\" (UID: \"7d117b07-cdb8-4d98-bd18-87d6511259af\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.962121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjj4\" (UniqueName: \"kubernetes.io/projected/6545794f-bb0e-4cb6-848b-436201e3af4f-kube-api-access-bvjj4\") pod 
\"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.967432 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.969805 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.979824 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.048654 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f286c1e_d207_47a0_86be_6711856071a7.slice/crio-507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22 WatchSource:0}: Error finding container 507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22: Status 404 returned error can't find the container with id 507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22 Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.050476 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f4dcd6_0eea_40f3_9968_458594d82013.slice/crio-e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6 WatchSource:0}: Error finding container e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6: Status 404 returned error can't find the container with id e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063236 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063352 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpmpb\" (UniqueName: \"kubernetes.io/projected/7d117b07-cdb8-4d98-bd18-87d6511259af-kube-api-access-zpmpb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d5brc\" (UID: \"7d117b07-cdb8-4d98-bd18-87d6511259af\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063374 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvjj4\" (UniqueName: \"kubernetes.io/projected/6545794f-bb0e-4cb6-848b-436201e3af4f-kube-api-access-bvjj4\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063592 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063688 4932 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.563664281 +0000 UTC m=+972.145619126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063729 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063771 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.563756583 +0000 UTC m=+972.145711428 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.091062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvjj4\" (UniqueName: \"kubernetes.io/projected/6545794f-bb0e-4cb6-848b-436201e3af4f-kube-api-access-bvjj4\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.092621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpmpb\" (UniqueName: \"kubernetes.io/projected/7d117b07-cdb8-4d98-bd18-87d6511259af-kube-api-access-zpmpb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d5brc\" (UID: \"7d117b07-cdb8-4d98-bd18-87d6511259af\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.108824 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" event={"ID":"4f286c1e-d207-47a0-86be-6711856071a7","Type":"ContainerStarted","Data":"507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22"} Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.117542 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" event={"ID":"33f4dcd6-0eea-40f3-9968-458594d82013","Type":"ContainerStarted","Data":"e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6"} Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.134322 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" event={"ID":"d03b5e78-a45c-49aa-8915-336be03c8c94","Type":"ContainerStarted","Data":"468dedad4dff3ccab4a0b7d04e45ed8000b19fc6f8d83ce6f9c4a3d105e4cd26"} Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.139156 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.165100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.165421 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.165473 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:09.165457568 +0000 UTC m=+972.747412413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.168795 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.191047 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.193497 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.195135 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0fe77a1_c4a7_422f_b7c2_3062c2af1393.slice/crio-9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c WatchSource:0}: Error finding container 9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c: Status 404 returned error can't find the container with id 9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.385882 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.412765 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.418136 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.465198 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffff0e6b_64e2_499f_8296_f374c5d62450.slice/crio-d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555 WatchSource:0}: Error finding container d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555: Status 404 returned error can't find the container with id d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555 Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.478374 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb6df58_4273_41ac_8d6d_34d04a30adef.slice/crio-f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f WatchSource:0}: Error finding container f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f: Status 404 returned error can't find the container with id f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.512251 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.524150 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0fc20a0_4c08_4552_be44_459c503d50c3.slice/crio-9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024 WatchSource:0}: Error finding container 9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024: Status 404 returned error can't find the container with id 9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.575450 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.575576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.575708 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.575753 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:09.575739695 +0000 UTC m=+973.157694540 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.576107 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.576133 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:09.576126015 +0000 UTC m=+973.158080860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.626291 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.632115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.634143 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191cd867_8aef_41cd_ae38_18b08d073f5d.slice/crio-8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99 WatchSource:0}: Error finding container 8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99: Status 404 returned error can't find the 
container with id 8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.637287 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.637755 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27b507e8_a4b3_49cb_bef2_85a319a10257.slice/crio-9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2 WatchSource:0}: Error finding container 9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2: Status 404 returned error can't find the container with id 9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2 Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.638594 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f1057b4_de48_4123_986c_795f9957899a.slice/crio-4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1 WatchSource:0}: Error finding container 4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1: Status 404 returned error can't find the container with id 4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1 Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.640550 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-94d6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-wt2rd_openstack-operators(4f1057b4-de48-4123-986c-795f9957899a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.642437 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podUID="4f1057b4-de48-4123-986c-795f9957899a" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.757122 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.778354 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.782059 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd09a2660_c1e2_4305_b601_f9fb39b12ed9.slice/crio-7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32 WatchSource:0}: Error finding container 7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32: Status 404 returned error can't find the container with id 7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.785016 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.792031 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.793670 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f1309cd_f84d_48a6_a8bc_fd4f70307c12.slice/crio-1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504 WatchSource:0}: Error finding container 1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504: Status 404 returned error can't find the container with id 1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504 Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.798473 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gq9mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-t8b9r_openstack-operators(d09a2660-c1e2-4305-b601-f9fb39b12ed9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.799673 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.810031 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.810431 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 
19:50:08.810499 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:10.810464618 +0000 UTC m=+974.392419463 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.815438 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.58:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75kcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7fcbb7ddf5-xlhwm_openstack-operators(9f1309cd-f84d-48a6-a8bc-fd4f70307c12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.815548 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8dcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-rw4dl_openstack-operators(52b91f42-32e6-4e15-887f-56098da3900b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.818602 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podUID="52b91f42-32e6-4e15-887f-56098da3900b" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.818847 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podUID="9f1309cd-f84d-48a6-a8bc-fd4f70307c12" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.865643 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"] Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.880381 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-49ftt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-ts8dz_openstack-operators(6565f17b-d11e-4f28-bc32-f6e43062f81b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.881976 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podUID="6565f17b-d11e-4f28-bc32-f6e43062f81b" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.897806 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.908578 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d117b07_cdb8_4d98_bd18_87d6511259af.slice/crio-73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9 WatchSource:0}: Error finding container 73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9: Status 404 returned error can't find the container with id 
73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9 Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.916324 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpmpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-d5brc_openstack-operators(7d117b07-cdb8-4d98-bd18-87d6511259af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.917971 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podUID="7d117b07-cdb8-4d98-bd18-87d6511259af" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.156058 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" event={"ID":"a0fe77a1-c4a7-422f-b7c2-3062c2af1393","Type":"ContainerStarted","Data":"9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.157652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" 
event={"ID":"4f1057b4-de48-4123-986c-795f9957899a","Type":"ContainerStarted","Data":"4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1"} Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.159575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podUID="4f1057b4-de48-4123-986c-795f9957899a" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.189937 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" event={"ID":"ffff0e6b-64e2-499f-8296-f374c5d62450","Type":"ContainerStarted","Data":"d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.189987 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" event={"ID":"191cd867-8aef-41cd-ae38-18b08d073f5d","Type":"ContainerStarted","Data":"8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190001 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" event={"ID":"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0","Type":"ContainerStarted","Data":"f6252645b2b7572ffbe84333faa5c64fbaf250f9eb400dbe3894cf40398f3ff1"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190010 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" 
event={"ID":"9647c082-6b36-4f38-b1fb-663f095997e9","Type":"ContainerStarted","Data":"c54637c0d17b95a210c8a23473047e2eb4d6f68a84916016223ed49d63d7fe85"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190021 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" event={"ID":"4eb6df58-4273-41ac-8d6d-34d04a30adef","Type":"ContainerStarted","Data":"f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" event={"ID":"d0fc20a0-4c08-4552-be44-459c503d50c3","Type":"ContainerStarted","Data":"9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.191589 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" event={"ID":"3967efad-3234-435e-b755-f684ffd74918","Type":"ContainerStarted","Data":"909acaeac0150e726816e54fdf2e638be37c3f5afd2973732684bd269fafa781"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.192865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" event={"ID":"7d117b07-cdb8-4d98-bd18-87d6511259af","Type":"ContainerStarted","Data":"73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.194219 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" event={"ID":"6565f17b-d11e-4f28-bc32-f6e43062f81b","Type":"ContainerStarted","Data":"04f7dba8bd493f97fa2230d32e7a1a86b4b3bf952c7f245f066e7b375129f626"} Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.195429 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podUID="7d117b07-cdb8-4d98-bd18-87d6511259af" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.196428 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podUID="6565f17b-d11e-4f28-bc32-f6e43062f81b" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.205389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" event={"ID":"59af7bc1-7774-4102-ae6c-2d7f820d3b93","Type":"ContainerStarted","Data":"b4a6cd93bfed5ed21418b5c1830c171b8dbb74c315729388d5073102959eba17"} Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.210962 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podUID="9f1309cd-f84d-48a6-a8bc-fd4f70307c12" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.212775 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podUID="52b91f42-32e6-4e15-887f-56098da3900b" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.213784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.207894 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" event={"ID":"6b690eeb-2e37-49d8-9f44-9ca086aa2f00","Type":"ContainerStarted","Data":"b642512460248bc2b08067e00e663ae932b9fbf0fbf6c1d3cc0135252757086e"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214106 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" event={"ID":"9f1309cd-f84d-48a6-a8bc-fd4f70307c12","Type":"ContainerStarted","Data":"1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214145 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" event={"ID":"52b91f42-32e6-4e15-887f-56098da3900b","Type":"ContainerStarted","Data":"89b79ca88a564057e8c16954b8b9ef51642964daa2277fe2a3be9f44ec459a37"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214162 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" 
event={"ID":"d09a2660-c1e2-4305-b601-f9fb39b12ed9","Type":"ContainerStarted","Data":"7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214226 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" event={"ID":"27b507e8-a4b3-49cb-bef2-85a319a10257","Type":"ContainerStarted","Data":"9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.216405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.218697 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.218744 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:11.218730845 +0000 UTC m=+974.800685690 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.624800 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.624935 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625019 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625138 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625155 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:11.625106416 +0000 UTC m=+975.207061331 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625285 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:11.62526742 +0000 UTC m=+975.207222265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.257330 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.258898 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podUID="6565f17b-d11e-4f28-bc32-f6e43062f81b" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.261209 4932 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podUID="4f1057b4-de48-4123-986c-795f9957899a" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.261254 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podUID="52b91f42-32e6-4e15-887f-56098da3900b" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.261280 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podUID="9f1309cd-f84d-48a6-a8bc-fd4f70307c12" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.262526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podUID="7d117b07-cdb8-4d98-bd18-87d6511259af" Feb 18 19:50:10 crc kubenswrapper[4932]: I0218 19:50:10.847419 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.847591 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.847963 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:14.847943639 +0000 UTC m=+978.429898484 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: I0218 19:50:11.253381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.253513 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.253576 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert 
podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:15.253557781 +0000 UTC m=+978.835512626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: I0218 19:50:11.658583 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:11 crc kubenswrapper[4932]: I0218 19:50:11.658753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658801 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658892 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:15.658875416 +0000 UTC m=+979.240830261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658900 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658958 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:15.658940728 +0000 UTC m=+979.240895593 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:14 crc kubenswrapper[4932]: I0218 19:50:14.918203 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:14 crc kubenswrapper[4932]: E0218 19:50:14.918455 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:14 crc kubenswrapper[4932]: E0218 19:50:14.918816 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert 
podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:22.918787681 +0000 UTC m=+986.500742556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: I0218 19:50:15.331475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.331727 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.331825 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.331798775 +0000 UTC m=+986.913753660 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: I0218 19:50:15.737840 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.738116 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.738450 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.738418392 +0000 UTC m=+987.320373277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: I0218 19:50:15.739263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.739482 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.739585 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.73956024 +0000 UTC m=+987.321515135 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.232386 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.232872 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvp48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-4hrhw_openstack-operators(4eb6df58-4273-41ac-8d6d-34d04a30adef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.234113 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" podUID="4eb6df58-4273-41ac-8d6d-34d04a30adef" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.342558 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" podUID="4eb6df58-4273-41ac-8d6d-34d04a30adef" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.781363 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.781565 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5wnsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-85c99d655-6k58x_openstack-operators(191cd867-8aef-41cd-ae38-18b08d073f5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.782809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" podUID="191cd867-8aef-41cd-ae38-18b08d073f5d" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.334230 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.334489 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktdsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-57746b5ff9-56fbf_openstack-operators(59af7bc1-7774-4102-ae6c-2d7f820d3b93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.335729 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.347576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.348134 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" podUID="191cd867-8aef-41cd-ae38-18b08d073f5d" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.833724 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.834123 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkvb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-m86tn_openstack-operators(9647c082-6b36-4f38-b1fb-663f095997e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.835565 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" podUID="9647c082-6b36-4f38-b1fb-663f095997e9" Feb 18 19:50:22 crc kubenswrapper[4932]: I0218 19:50:22.949912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod 
\"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.950069 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.950182 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:38.950145747 +0000 UTC m=+1002.532100602 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.278551 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.278719 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxs72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-54fb488b88-4m7xr_openstack-operators(3967efad-3234-435e-b755-f684ffd74918): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.279828 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" podUID="3967efad-3234-435e-b755-f684ffd74918" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.355976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.363856 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.384516 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" podUID="9647c082-6b36-4f38-b1fb-663f095997e9" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.388325 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.393694 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" podUID="3967efad-3234-435e-b755-f684ffd74918" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.766116 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.766531 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.772061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.775062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.845534 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"] Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.058091 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.403405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" event={"ID":"a0fe77a1-c4a7-422f-b7c2-3062c2af1393","Type":"ContainerStarted","Data":"733a9d1a44e9a682b945b22ef5e79205bff20c260b3d0d97498686fbbc646da2"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.403778 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.405664 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" event={"ID":"d0fc20a0-4c08-4552-be44-459c503d50c3","Type":"ContainerStarted","Data":"a545682e8d8422d1a3126d8cf777a1aea7727035549b1d20359506b1ade75484"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.406076 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.407153 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" event={"ID":"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0","Type":"ContainerStarted","Data":"681d9468c6e7659152849f2c97a567529e41c4387edcd23775f7036224273257"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.407516 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.413366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" 
event={"ID":"33f4dcd6-0eea-40f3-9968-458594d82013","Type":"ContainerStarted","Data":"7fb1a3a85285deb683da88ce039aaabecb5959455c349532bbfbfc93a9df50cc"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.413460 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.417642 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" event={"ID":"27b507e8-a4b3-49cb-bef2-85a319a10257","Type":"ContainerStarted","Data":"af559c036304a1c6bcea9be8b47f693d13662c1daa5fa882c9026ad81a8f8abd"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.418240 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.424308 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" event={"ID":"d03b5e78-a45c-49aa-8915-336be03c8c94","Type":"ContainerStarted","Data":"7d2d42a4c0efebac458437ba4d27f1dd825f3649fbb53e22edd5be8a2590eb4c"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.425106 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.426736 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" podStartSLOduration=3.316008031 podStartE2EDuration="18.426716261s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.210828366 +0000 UTC m=+971.792783211" lastFinishedPulling="2026-02-18 19:50:23.321536596 +0000 UTC m=+986.903491441" 
observedRunningTime="2026-02-18 19:50:24.423634385 +0000 UTC m=+988.005589230" watchObservedRunningTime="2026-02-18 19:50:24.426716261 +0000 UTC m=+988.008671106" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.430888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" event={"ID":"73445d4e-349f-4e37-a75d-44949a14db73","Type":"ContainerStarted","Data":"789734b0e05fca0142cb2c956b46e3d6f5ba46c8eff8bc6c672afee877d82ed5"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.432848 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" event={"ID":"ffff0e6b-64e2-499f-8296-f374c5d62450","Type":"ContainerStarted","Data":"92e84309122038c81174fd17412a316dc294db8340cdc3dd56ab0af0b29a8ad1"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.433522 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.437403 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" event={"ID":"6b690eeb-2e37-49d8-9f44-9ca086aa2f00","Type":"ContainerStarted","Data":"7079b445c0febb98258065446880f5675a69d5feb7711237795273fbe1fd642d"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.437569 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.443945 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" event={"ID":"4f286c1e-d207-47a0-86be-6711856071a7","Type":"ContainerStarted","Data":"c4709d551b5fdefc852a7e39f4523756300d7ccd6a833f54b8cfd4efc562db03"} Feb 18 
19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.445536 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.447978 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" podStartSLOduration=2.658469886 podStartE2EDuration="17.447957904s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.52641599 +0000 UTC m=+972.108370835" lastFinishedPulling="2026-02-18 19:50:23.315904018 +0000 UTC m=+986.897858853" observedRunningTime="2026-02-18 19:50:24.442953731 +0000 UTC m=+988.024908576" watchObservedRunningTime="2026-02-18 19:50:24.447957904 +0000 UTC m=+988.029912749" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.467744 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" podStartSLOduration=2.75196737 podStartE2EDuration="17.467727191s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.640154982 +0000 UTC m=+972.222109827" lastFinishedPulling="2026-02-18 19:50:23.355914813 +0000 UTC m=+986.937869648" observedRunningTime="2026-02-18 19:50:24.466540712 +0000 UTC m=+988.048495577" watchObservedRunningTime="2026-02-18 19:50:24.467727191 +0000 UTC m=+988.049682036" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.491290 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" podStartSLOduration=3.2271993820000002 podStartE2EDuration="18.491272831s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.05228067 +0000 UTC m=+971.634235515" lastFinishedPulling="2026-02-18 19:50:23.316354099 +0000 UTC 
m=+986.898308964" observedRunningTime="2026-02-18 19:50:24.487480298 +0000 UTC m=+988.069435143" watchObservedRunningTime="2026-02-18 19:50:24.491272831 +0000 UTC m=+988.073227676" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.510129 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" podStartSLOduration=2.977754532 podStartE2EDuration="17.510111845s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.784257222 +0000 UTC m=+972.366212067" lastFinishedPulling="2026-02-18 19:50:23.316614535 +0000 UTC m=+986.898569380" observedRunningTime="2026-02-18 19:50:24.509433889 +0000 UTC m=+988.091388744" watchObservedRunningTime="2026-02-18 19:50:24.510111845 +0000 UTC m=+988.092066690" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.523520 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" podStartSLOduration=3.166958339 podStartE2EDuration="18.523498785s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:07.955812214 +0000 UTC m=+971.537767059" lastFinishedPulling="2026-02-18 19:50:23.31235265 +0000 UTC m=+986.894307505" observedRunningTime="2026-02-18 19:50:24.523393163 +0000 UTC m=+988.105348018" watchObservedRunningTime="2026-02-18 19:50:24.523498785 +0000 UTC m=+988.105453630" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.578385 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" podStartSLOduration=3.496869636 podStartE2EDuration="18.578368247s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.234446928 +0000 UTC m=+971.816401773" lastFinishedPulling="2026-02-18 19:50:23.315945539 +0000 UTC m=+986.897900384" 
observedRunningTime="2026-02-18 19:50:24.57485308 +0000 UTC m=+988.156807925" watchObservedRunningTime="2026-02-18 19:50:24.578368247 +0000 UTC m=+988.160323092" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.597157 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" podStartSLOduration=2.75116653 podStartE2EDuration="17.597132569s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.477013193 +0000 UTC m=+972.058968038" lastFinishedPulling="2026-02-18 19:50:23.322979232 +0000 UTC m=+986.904934077" observedRunningTime="2026-02-18 19:50:24.592868954 +0000 UTC m=+988.174823799" watchObservedRunningTime="2026-02-18 19:50:24.597132569 +0000 UTC m=+988.179087414" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.613930 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" podStartSLOduration=3.348424709 podStartE2EDuration="18.613911222s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.050406494 +0000 UTC m=+971.632361339" lastFinishedPulling="2026-02-18 19:50:23.315892997 +0000 UTC m=+986.897847852" observedRunningTime="2026-02-18 19:50:24.613051821 +0000 UTC m=+988.195006666" watchObservedRunningTime="2026-02-18 19:50:24.613911222 +0000 UTC m=+988.195866077" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.633327 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"] Feb 18 19:50:24 crc kubenswrapper[4932]: W0218 19:50:24.669073 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6545794f_bb0e_4cb6_848b_436201e3af4f.slice/crio-3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258 WatchSource:0}: 
Error finding container 3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258: Status 404 returned error can't find the container with id 3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258 Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.452633 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" event={"ID":"6545794f-bb0e-4cb6-848b-436201e3af4f","Type":"ContainerStarted","Data":"65cd089d368629d13519e4dc731dd3400379eb2e810a849e7e631a519fdba06b"} Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.452880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" event={"ID":"6545794f-bb0e-4cb6-848b-436201e3af4f","Type":"ContainerStarted","Data":"3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258"} Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.454089 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.488593 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" podStartSLOduration=18.488573229 podStartE2EDuration="18.488573229s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:50:25.477208849 +0000 UTC m=+989.059163714" watchObservedRunningTime="2026-02-18 19:50:25.488573229 +0000 UTC m=+989.070528074" Feb 18 19:50:34 crc kubenswrapper[4932]: I0218 19:50:34.074586 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 
19:50:37.111141 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.145764 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.199523 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.239715 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.423483 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.590073 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.738121 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.738293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.840428 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:39 crc kubenswrapper[4932]: I0218 19:50:39.000236 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:39 crc kubenswrapper[4932]: I0218 19:50:39.005898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:39 crc kubenswrapper[4932]: I0218 19:50:39.118841 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:42 crc kubenswrapper[4932]: E0218 19:50:42.845546 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89" Feb 18 19:50:42 crc kubenswrapper[4932]: E0218 19:50:42.846067 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gq9mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-t8b9r_openstack-operators(d09a2660-c1e2-4305-b601-f9fb39b12ed9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:42 crc kubenswrapper[4932]: E0218 19:50:42.847245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:44 crc kubenswrapper[4932]: E0218 19:50:44.720503 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d" Feb 18 19:50:44 crc kubenswrapper[4932]: E0218 19:50:44.720991 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktdsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-57746b5ff9-56fbf_openstack-operators(59af7bc1-7774-4102-ae6c-2d7f820d3b93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:44 crc kubenswrapper[4932]: E0218 19:50:44.722295 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93" Feb 18 19:50:45 crc kubenswrapper[4932]: I0218 19:50:45.776278 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"] Feb 18 19:50:45 crc kubenswrapper[4932]: W0218 19:50:45.810660 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376e77a5_0e6f_4999_a037_96154984442f.slice/crio-b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a WatchSource:0}: Error finding container 
b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a: Status 404 returned error can't find the container with id b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.611249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" event={"ID":"7d117b07-cdb8-4d98-bd18-87d6511259af","Type":"ContainerStarted","Data":"de167f08b4b55de274dc6521f592f3dee703e3c67b177dc47133e5bf08bd181e"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.614143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" event={"ID":"73445d4e-349f-4e37-a75d-44949a14db73","Type":"ContainerStarted","Data":"4408289fc9349fb32cacbd1a2cafce9967888a2b8961294417d56dbafadba8b6"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.614229 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.615784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" event={"ID":"4f1057b4-de48-4123-986c-795f9957899a","Type":"ContainerStarted","Data":"81107fe48eaa2a2b6ce420585d8ab6381c6a6a8bd5f76fa2f82145d17ba0a2a0"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.615963 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.617080 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" 
event={"ID":"376e77a5-0e6f-4999-a037-96154984442f","Type":"ContainerStarted","Data":"b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.618329 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" event={"ID":"6565f17b-d11e-4f28-bc32-f6e43062f81b","Type":"ContainerStarted","Data":"5ee2c195f8d7176fe0436f657cd8c7df8fd514e879d3c3b20dd00948fb14e37f"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.618497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.619303 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" event={"ID":"191cd867-8aef-41cd-ae38-18b08d073f5d","Type":"ContainerStarted","Data":"2a1414219dedda1cca9406a393bf90f333fd52ca579e9ff594aeedf7547c234d"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.619476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.620433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" event={"ID":"4eb6df58-4273-41ac-8d6d-34d04a30adef","Type":"ContainerStarted","Data":"47b401148e026022a236089acf4941c77a21fd3f60222aed4b784b68f38d3642"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.620647 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.622287 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" event={"ID":"52b91f42-32e6-4e15-887f-56098da3900b","Type":"ContainerStarted","Data":"f2402a314bbb251296fb89c887ae805e1cdaa206060223e8555ac386b731d163"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.622416 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.624610 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podStartSLOduration=3.183036819 podStartE2EDuration="39.624600548s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.916143931 +0000 UTC m=+972.498098776" lastFinishedPulling="2026-02-18 19:50:45.35770762 +0000 UTC m=+1008.939662505" observedRunningTime="2026-02-18 19:50:46.623734997 +0000 UTC m=+1010.205689842" watchObservedRunningTime="2026-02-18 19:50:46.624600548 +0000 UTC m=+1010.206555393" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.630507 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" event={"ID":"3967efad-3234-435e-b755-f684ffd74918","Type":"ContainerStarted","Data":"73a61ca2035c140cb774984c28e792f84f9f9126a13c137b4f50b161e00b88da"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.630739 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.632514 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" 
event={"ID":"9f1309cd-f84d-48a6-a8bc-fd4f70307c12","Type":"ContainerStarted","Data":"c30ca0181129682e02046bbab9351f2a35066687e2829ea3bcb6252fe033efd9"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.632802 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.634186 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" event={"ID":"9647c082-6b36-4f38-b1fb-663f095997e9","Type":"ContainerStarted","Data":"18158043147a1c6fc3f911389f310dbde99162b08c2c7593ece0c2f94d0c4a9a"} Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.634383 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.660838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podStartSLOduration=5.72221832 podStartE2EDuration="39.66081715s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.640430979 +0000 UTC m=+972.222385824" lastFinishedPulling="2026-02-18 19:50:42.579029809 +0000 UTC m=+1006.160984654" observedRunningTime="2026-02-18 19:50:46.638987913 +0000 UTC m=+1010.220942758" watchObservedRunningTime="2026-02-18 19:50:46.66081715 +0000 UTC m=+1010.242771995" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.676510 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" podStartSLOduration=18.19104588 podStartE2EDuration="39.676486976s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:23.860101824 +0000 UTC 
m=+987.442056669" lastFinishedPulling="2026-02-18 19:50:45.34554288 +0000 UTC m=+1008.927497765" observedRunningTime="2026-02-18 19:50:46.675092822 +0000 UTC m=+1010.257047677" watchObservedRunningTime="2026-02-18 19:50:46.676486976 +0000 UTC m=+1010.258441831" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.699285 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" podStartSLOduration=2.822817026 podStartE2EDuration="39.699250757s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.481216647 +0000 UTC m=+972.063171492" lastFinishedPulling="2026-02-18 19:50:45.357650368 +0000 UTC m=+1008.939605223" observedRunningTime="2026-02-18 19:50:46.69816101 +0000 UTC m=+1010.280115855" watchObservedRunningTime="2026-02-18 19:50:46.699250757 +0000 UTC m=+1010.281205602" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.716245 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podStartSLOduration=14.759504586 podStartE2EDuration="39.716228915s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.815376569 +0000 UTC m=+972.397331414" lastFinishedPulling="2026-02-18 19:50:33.772100898 +0000 UTC m=+997.354055743" observedRunningTime="2026-02-18 19:50:46.71152528 +0000 UTC m=+1010.293480125" watchObservedRunningTime="2026-02-18 19:50:46.716228915 +0000 UTC m=+1010.298183760" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.731624 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" podStartSLOduration=2.76489005 podStartE2EDuration="39.731607354s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.637049236 +0000 UTC m=+972.219004081" 
lastFinishedPulling="2026-02-18 19:50:45.60376655 +0000 UTC m=+1009.185721385" observedRunningTime="2026-02-18 19:50:46.730448476 +0000 UTC m=+1010.312403321" watchObservedRunningTime="2026-02-18 19:50:46.731607354 +0000 UTC m=+1010.313562199" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.746531 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podStartSLOduration=3.275944509 podStartE2EDuration="39.746517372s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.880210826 +0000 UTC m=+972.462165671" lastFinishedPulling="2026-02-18 19:50:45.350783679 +0000 UTC m=+1008.932738534" observedRunningTime="2026-02-18 19:50:46.743478227 +0000 UTC m=+1010.325433062" watchObservedRunningTime="2026-02-18 19:50:46.746517372 +0000 UTC m=+1010.328472217" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.766663 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" podStartSLOduration=3.39224931 podStartE2EDuration="40.766647808s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.246984376 +0000 UTC m=+971.828939221" lastFinishedPulling="2026-02-18 19:50:45.621382854 +0000 UTC m=+1009.203337719" observedRunningTime="2026-02-18 19:50:46.763479839 +0000 UTC m=+1010.345434684" watchObservedRunningTime="2026-02-18 19:50:46.766647808 +0000 UTC m=+1010.348602653" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.809266 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podStartSLOduration=3.276176263 podStartE2EDuration="39.809251937s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.815286736 +0000 UTC m=+972.397241571" 
lastFinishedPulling="2026-02-18 19:50:45.34836239 +0000 UTC m=+1008.930317245" observedRunningTime="2026-02-18 19:50:46.802983653 +0000 UTC m=+1010.384938498" watchObservedRunningTime="2026-02-18 19:50:46.809251937 +0000 UTC m=+1010.391206782" Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.818980 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" podStartSLOduration=2.762814567 podStartE2EDuration="39.818964926s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.405744938 +0000 UTC m=+971.987699783" lastFinishedPulling="2026-02-18 19:50:45.461895297 +0000 UTC m=+1009.043850142" observedRunningTime="2026-02-18 19:50:46.816394323 +0000 UTC m=+1010.398349178" watchObservedRunningTime="2026-02-18 19:50:46.818964926 +0000 UTC m=+1010.400919771" Feb 18 19:50:48 crc kubenswrapper[4932]: I0218 19:50:48.653117 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" event={"ID":"376e77a5-0e6f-4999-a037-96154984442f","Type":"ContainerStarted","Data":"ea0442f6ccccd1eb24ac1bc1ae00a7a29b4eb24d2c1d407438f6f49e47cdaeb0"} Feb 18 19:50:48 crc kubenswrapper[4932]: I0218 19:50:48.653665 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:48 crc kubenswrapper[4932]: I0218 19:50:48.671886 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" podStartSLOduration=40.193013356 podStartE2EDuration="42.671868312s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:45.827472651 +0000 UTC m=+1009.409427496" lastFinishedPulling="2026-02-18 19:50:48.306327567 +0000 UTC m=+1011.888282452" observedRunningTime="2026-02-18 
19:50:48.669058613 +0000 UTC m=+1012.251013468" watchObservedRunningTime="2026-02-18 19:50:48.671868312 +0000 UTC m=+1012.253823177" Feb 18 19:50:53 crc kubenswrapper[4932]: I0218 19:50:53.397705 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:54 crc kubenswrapper[4932]: E0218 19:50:54.182041 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.581478 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.639749 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.678694 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.737133 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.827783 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.943690 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.950437 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.970543 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:59 crc kubenswrapper[4932]: I0218 19:50:59.126343 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:51:00 crc kubenswrapper[4932]: E0218 19:51:00.181990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93" Feb 18 19:51:08 crc kubenswrapper[4932]: I0218 19:51:08.817379 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" event={"ID":"d09a2660-c1e2-4305-b601-f9fb39b12ed9","Type":"ContainerStarted","Data":"dcb57fcd46fef995fbf922fe40b2c085bb9b037a023c91fd3d7ffc176245ba93"} Feb 18 19:51:08 crc kubenswrapper[4932]: I0218 19:51:08.818142 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:51:08 crc kubenswrapper[4932]: I0218 19:51:08.838601 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podStartSLOduration=2.761789623 podStartE2EDuration="1m1.838583113s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.798303128 +0000 UTC m=+972.380257973" lastFinishedPulling="2026-02-18 19:51:07.875096578 +0000 UTC m=+1031.457051463" observedRunningTime="2026-02-18 19:51:08.836916232 +0000 UTC m=+1032.418871087" watchObservedRunningTime="2026-02-18 19:51:08.838583113 +0000 UTC m=+1032.420537958" Feb 18 19:51:11 crc kubenswrapper[4932]: I0218 19:51:11.853437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" event={"ID":"59af7bc1-7774-4102-ae6c-2d7f820d3b93","Type":"ContainerStarted","Data":"fb40dce15dd57eb5455d8ea0d14849574a9a7c8e57b6995459fa9f81a80fc3a2"} Feb 18 19:51:11 crc kubenswrapper[4932]: I0218 19:51:11.853962 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:51:11 crc kubenswrapper[4932]: I0218 19:51:11.878341 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podStartSLOduration=2.341153556 podStartE2EDuration="1m5.878319595s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.116957883 +0000 UTC m=+971.698912728" lastFinishedPulling="2026-02-18 19:51:11.654123922 +0000 UTC m=+1035.236078767" observedRunningTime="2026-02-18 19:51:11.872572483 +0000 UTC m=+1035.454527338" watchObservedRunningTime="2026-02-18 19:51:11.878319595 +0000 UTC m=+1035.460274450" Feb 18 19:51:17 crc kubenswrapper[4932]: I0218 19:51:17.125942 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:51:17 crc 
kubenswrapper[4932]: I0218 19:51:17.843049 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.987076 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.990519 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994012 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994073 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994305 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994741 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7qdlp" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.002103 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.021639 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.022975 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.025857 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.036143 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.052801 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.052883 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.153855 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.154002 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc 
kubenswrapper[4932]: I0218 19:51:36.154141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.154256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.154325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.155226 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.173009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: 
I0218 19:51:36.255132 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.255228 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.255251 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.256047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.256090 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.272440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.357465 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.365941 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.732223 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.767047 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:36 crc kubenswrapper[4932]: W0218 19:51:36.772336 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bac8c90_8ad0_4e01_8434_92f4bc659e1d.slice/crio-2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15 WatchSource:0}: Error finding container 2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15: Status 404 returned error can't find the container with id 2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15 Feb 18 19:51:37 crc kubenswrapper[4932]: I0218 19:51:37.055240 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" event={"ID":"9bac8c90-8ad0-4e01-8434-92f4bc659e1d","Type":"ContainerStarted","Data":"2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15"} Feb 18 19:51:37 crc kubenswrapper[4932]: I0218 19:51:37.056145 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" 
event={"ID":"fccb0fa8-b88d-469c-b88e-838aa9f5d481","Type":"ContainerStarted","Data":"46532bab9d5422ae97530391ba7e12cbc323bc5e7eec881c2be6645f3ff80478"} Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.543342 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.569752 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.571220 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.580623 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.750310 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.750412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.750578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " 
pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.852957 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.853007 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.853045 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.854072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.854513 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.865541 4932 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.883806 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.919840 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.923552 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.942780 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.947053 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.057153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.057226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 
19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.057257 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.158399 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.158491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.158533 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.159531 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.160033 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.183168 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.245830 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.256118 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.280891 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.285548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.317538 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.464289 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.464616 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.464660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.565704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.565780 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.565829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.566543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.566549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.598041 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.619215 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.719820 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.721286 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.728857 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729152 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729447 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729537 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ptcgt" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729856 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729989 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.730724 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.738482 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869759 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869780 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869805 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869857 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869876 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869926 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869960 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869975 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc 
kubenswrapper[4932]: I0218 19:51:40.971109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971186 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971236 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971313 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971335 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtlp\" 
(UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971362 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971456 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971478 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.972008 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.972287 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.972817 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.974060 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.974639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.975159 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.975636 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.979374 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.990519 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 
19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.993037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.094629 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.095966 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.097595 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.097759 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.098137 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.098838 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.098945 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.099065 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.099290 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l229h" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.110121 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.112951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173772 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173840 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173897 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174046 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174081 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174116 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174285 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174355 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276784 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276881 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277298 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277307 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277428 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277509 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.278026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.279702 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc 
kubenswrapper[4932]: I0218 19:51:41.281272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.282886 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.283808 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.284917 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.291018 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.291658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.294685 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.295872 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.303487 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.422866 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.423006 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.423995 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.427914 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428126 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-jc7nx" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428283 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428508 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428526 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428626 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428787 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.442440 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480533 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480651 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480825 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480874 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480912 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgn8m\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-kube-api-access-pgn8m\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480989 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481049 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481118 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a133994-7b33-4db4-a923-5b90d51e47b9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481311 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a133994-7b33-4db4-a923-5b90d51e47b9-pod-info\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582303 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582450 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a133994-7b33-4db4-a923-5b90d51e47b9-erlang-cookie-secret\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a133994-7b33-4db4-a923-5b90d51e47b9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582587 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582576 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgn8m\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-kube-api-access-pgn8m\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.583614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.583928 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.584713 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.584994 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.585467 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.587612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a133994-7b33-4db4-a923-5b90d51e47b9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.587740 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.587903 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.590626 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a133994-7b33-4db4-a923-5b90d51e47b9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.602106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgn8m\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-kube-api-access-pgn8m\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.607814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.757519 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.790760 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.793774 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.796511 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.796594 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-wgrhb"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.796879 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.810926 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.811665 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.811850 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908314 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908406 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-default\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-kolla-config\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908764 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908817 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxhw\" (UniqueName: \"kubernetes.io/projected/915c727d-cb48-4649-bd71-30a5edf798d5-kube-api-access-qsxhw\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-kolla-config\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009873 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009899 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009951 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxhw\" (UniqueName: \"kubernetes.io/projected/915c727d-cb48-4649-bd71-30a5edf798d5-kube-api-access-qsxhw\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009989 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010005 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010040 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-default\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010798 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.011232 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-kolla-config\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.011494 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-default\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.011640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.012319 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.018212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.023744 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.030621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxhw\" (UniqueName: \"kubernetes.io/projected/915c727d-cb48-4649-bd71-30a5edf798d5-kube-api-access-qsxhw\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.041369 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0"
Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.133033 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.176461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.177663 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.183238 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6r68t"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.185230 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.185413 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.187991 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.210594 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.227923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.227985 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228012 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228122 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228151 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqpw\" (UniqueName: \"kubernetes.io/projected/d9dd7155-a814-4ae0-92b9-6e71461473d5-kube-api-access-dpqpw\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329657 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329750 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqpw\" (UniqueName: \"kubernetes.io/projected/d9dd7155-a814-4ae0-92b9-6e71461473d5-kube-api-access-dpqpw\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329868 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.330140 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.333032 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.333960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.335380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.335398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.335760 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.341692 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.351033 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.354612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqpw\" (UniqueName: \"kubernetes.io/projected/d9dd7155-a814-4ae0-92b9-6e71461473d5-kube-api-access-dpqpw\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.357144 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.358319 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.360413 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.360738 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.360930 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9hv7p"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.381078 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431004 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhjxd\" (UniqueName: \"kubernetes.io/projected/fd0a010e-64af-4552-8098-747bf5644c3c-kube-api-access-mhjxd\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-kolla-config\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-config-data\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431223 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.502658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532745 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532797 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532879 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhjxd\" (UniqueName: \"kubernetes.io/projected/fd0a010e-64af-4552-8098-747bf5644c3c-kube-api-access-mhjxd\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532911 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-kolla-config\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532955 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-config-data\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.533664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-config-data\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.533801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-kolla-config\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.536296 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.538042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.552106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhjxd\" (UniqueName: \"kubernetes.io/projected/fd0a010e-64af-4552-8098-747bf5644c3c-kube-api-access-mhjxd\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0"
Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.722512 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.866890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.867806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.869383 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dtbf6"
Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.880127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.970381 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"kube-state-metrics-0\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:51:47 crc kubenswrapper[4932]: I0218 19:51:47.073125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"kube-state-metrics-0\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:51:47 crc kubenswrapper[4932]: I0218 19:51:47.091206 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"kube-state-metrics-0\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:51:47 crc kubenswrapper[4932]: I0218 19:51:47.186108 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.172897 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.175124 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.179524 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.179543 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.181772 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.182312 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.182445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5jcnf"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.183579 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.188260 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190669 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190746 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190770 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190961 4932
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191001 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191035 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191069 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191086 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 
18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191483 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.197942 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.207258 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294302 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294868 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294900 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294924 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294954 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294981 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.295013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.295030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.295076 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.296474 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.296572 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.296682 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.297799 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.297842 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e039419306e79ade7652e80c67474011a5658585fd3b39d0b236ffa94ab5d0db/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.299563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.299581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.299724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") 
pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.300690 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.307420 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.318694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.338869 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.495161 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.714666 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-99qbh"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.716464 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.721353 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4q9hb" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.721942 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.726064 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.738225 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lvg9q"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.740709 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.755278 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.787672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lvg9q"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825125 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-ovn-controller-tls-certs\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr894\" (UniqueName: \"kubernetes.io/projected/039d44bb-1ad0-4916-8ef2-3cece4829506-kube-api-access-vr894\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-combined-ca-bundle\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825411 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc 
kubenswrapper[4932]: I0218 19:51:49.825547 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825722 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-log-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825780 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/039d44bb-1ad0-4916-8ef2-3cece4829506-scripts\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.927967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-run\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928055 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-lib\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928229 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-etc-ovs\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928404 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca19a8de-2aaf-459e-bfcd-d73a819558b0-scripts\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928468 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-ovn-controller-tls-certs\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928572 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6nt\" (UniqueName: \"kubernetes.io/projected/ca19a8de-2aaf-459e-bfcd-d73a819558b0-kube-api-access-9r6nt\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928626 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr894\" (UniqueName: \"kubernetes.io/projected/039d44bb-1ad0-4916-8ef2-3cece4829506-kube-api-access-vr894\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928676 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-combined-ca-bundle\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928715 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-log-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-log\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928974 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/039d44bb-1ad0-4916-8ef2-3cece4829506-scripts\") pod 
\"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.930482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.930548 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.930780 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-log-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.934470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/039d44bb-1ad0-4916-8ef2-3cece4829506-scripts\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.936338 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-ovn-controller-tls-certs\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.936930 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-combined-ca-bundle\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.949151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr894\" (UniqueName: \"kubernetes.io/projected/039d44bb-1ad0-4916-8ef2-3cece4829506-kube-api-access-vr894\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.029848 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6nt\" (UniqueName: \"kubernetes.io/projected/ca19a8de-2aaf-459e-bfcd-d73a819558b0-kube-api-access-9r6nt\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-log\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030162 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-run\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030223 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-lib\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030244 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-etc-ovs\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030258 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca19a8de-2aaf-459e-bfcd-d73a819558b0-scripts\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-run\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.031008 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-log\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.031362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-lib\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " 
pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.031670 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-etc-ovs\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.032446 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca19a8de-2aaf-459e-bfcd-d73a819558b0-scripts\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.034785 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.046733 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.049197 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6nt\" (UniqueName: \"kubernetes.io/projected/ca19a8de-2aaf-459e-bfcd-d73a819558b0-kube-api-access-9r6nt\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.063462 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070246 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070248 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070424 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070620 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070717 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wqcp9" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070736 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.091614 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233734 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233792 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z558f\" (UniqueName: \"kubernetes.io/projected/09fcfda8-434e-4759-81cc-47304cbbe9d3-kube-api-access-z558f\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233857 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.234387 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 
crc kubenswrapper[4932]: I0218 19:51:50.234553 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.234618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.234770 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.335933 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336091 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z558f\" (UniqueName: \"kubernetes.io/projected/09fcfda8-434e-4759-81cc-47304cbbe9d3-kube-api-access-z558f\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336943 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336967 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.337294 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.337917 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.338142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.347608 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.358320 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.358876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.378243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z558f\" (UniqueName: \"kubernetes.io/projected/09fcfda8-434e-4759-81cc-47304cbbe9d3-kube-api-access-z558f\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.383854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.393847 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.396105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.397862 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.402796 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.403030 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.403613 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.403695 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mtm2x" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.410282 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504806 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-config\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504954 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcvk\" (UniqueName: \"kubernetes.io/projected/f9432af7-4713-4805-b822-efcb8b1fb21d-kube-api-access-phcvk\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504978 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504999 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.505018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.505037 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-config\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606430 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606492 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phcvk\" (UniqueName: \"kubernetes.io/projected/f9432af7-4713-4805-b822-efcb8b1fb21d-kube-api-access-phcvk\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606968 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.607960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-config\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.608386 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.609306 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.614233 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.615725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.616693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.630655 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phcvk\" (UniqueName: \"kubernetes.io/projected/f9432af7-4713-4805-b822-efcb8b1fb21d-kube-api-access-phcvk\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.638217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.729788 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.754068 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.754463 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.754767 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4k8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d46db5bb7-js9zs_openstack(fccb0fa8-b88d-469c-b88e-838aa9f5d481): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.756061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" podUID="fccb0fa8-b88d-469c-b88e-838aa9f5d481" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.785655 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.785716 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.785821 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxps7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-59c78cff8f-mnmbx_openstack(9bac8c90-8ad0-4e01-8434-92f4bc659e1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.787009 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" podUID="9bac8c90-8ad0-4e01-8434-92f4bc659e1d" Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.836320 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.843593 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.852579 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.879194 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: W0218 19:51:56.886127 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod915c727d_cb48_4649_bd71_30a5edf798d5.slice/crio-f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9 WatchSource:0}: Error finding container f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9: Status 404 returned error can't find the container with id f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9 Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.887165 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.928355 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.934485 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.052758 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.052855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.052916 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config" (OuterVolumeSpecName: "config") pod "fccb0fa8-b88d-469c-b88e-838aa9f5d481" (UID: "fccb0fa8-b88d-469c-b88e-838aa9f5d481"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053689 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053719 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9bac8c90-8ad0-4e01-8434-92f4bc659e1d" (UID: "9bac8c90-8ad0-4e01-8434-92f4bc659e1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053766 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054410 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config" (OuterVolumeSpecName: "config") pod "9bac8c90-8ad0-4e01-8434-92f4bc659e1d" (UID: "9bac8c90-8ad0-4e01-8434-92f4bc659e1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054687 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054705 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054716 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.088028 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7" (OuterVolumeSpecName: "kube-api-access-jxps7") pod "9bac8c90-8ad0-4e01-8434-92f4bc659e1d" (UID: "9bac8c90-8ad0-4e01-8434-92f4bc659e1d"). InnerVolumeSpecName "kube-api-access-jxps7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.088998 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g" (OuterVolumeSpecName: "kube-api-access-b4k8g") pod "fccb0fa8-b88d-469c-b88e-838aa9f5d481" (UID: "fccb0fa8-b88d-469c-b88e-838aa9f5d481"). InnerVolumeSpecName "kube-api-access-b4k8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.159229 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.159538 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.294021 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerStarted","Data":"f8af9221181a397252b3223e984d42961b745e96f126e899f25a0d278531e844"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.295865 4932 generic.go:334] "Generic (PLEG): container finished" podID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerID="7519d416745f73fbe57ab60a5e4c59b970e555510b0fcbd5d3f5ac822320b937" exitCode=0 Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.295965 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" event={"ID":"ab397921-9519-48e8-a5c0-5c388d54b6cd","Type":"ContainerDied","Data":"7519d416745f73fbe57ab60a5e4c59b970e555510b0fcbd5d3f5ac822320b937"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.296022 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" event={"ID":"ab397921-9519-48e8-a5c0-5c388d54b6cd","Type":"ContainerStarted","Data":"fb40657aa3dcb246fdc0a993fa98fd739b898555df945a722c59ed513a24340b"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.299062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerStarted","Data":"f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.300294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerStarted","Data":"8d6ecdb333ce753d501f234162033571ca4cf78773d9a86903cada2e21a8d576"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.301712 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" event={"ID":"fccb0fa8-b88d-469c-b88e-838aa9f5d481","Type":"ContainerDied","Data":"46532bab9d5422ae97530391ba7e12cbc323bc5e7eec881c2be6645f3ff80478"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.301779 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.304339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerStarted","Data":"df1d9be37e083e5a4584427f91148d70b49af32f754e3fd54a2d761cb7b0f9e2"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.306969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" event={"ID":"9bac8c90-8ad0-4e01-8434-92f4bc659e1d","Type":"ContainerDied","Data":"2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.307057 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.340702 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.348452 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.356809 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.394909 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.424228 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.561584 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.566865 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: W0218 19:51:57.578008 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9432af7_4713_4805_b822_efcb8b1fb21d.slice/crio-30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e WatchSource:0}: Error finding container 30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e: Status 404 returned error can't find the container with id 30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.606893 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.606949 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.607627 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.644687 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.670106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.691077 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.696542 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.701231 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.744093 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.896431 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"ab397921-9519-48e8-a5c0-5c388d54b6cd\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.896611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"ab397921-9519-48e8-a5c0-5c388d54b6cd\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.896701 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"ab397921-9519-48e8-a5c0-5c388d54b6cd\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.901238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6" (OuterVolumeSpecName: "kube-api-access-5j8m6") pod "ab397921-9519-48e8-a5c0-5c388d54b6cd" (UID: "ab397921-9519-48e8-a5c0-5c388d54b6cd"). InnerVolumeSpecName "kube-api-access-5j8m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.913950 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config" (OuterVolumeSpecName: "config") pod "ab397921-9519-48e8-a5c0-5c388d54b6cd" (UID: "ab397921-9519-48e8-a5c0-5c388d54b6cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.930624 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab397921-9519-48e8-a5c0-5c388d54b6cd" (UID: "ab397921-9519-48e8-a5c0-5c388d54b6cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.998676 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.998743 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.998757 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.320493 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lvg9q"] Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.321969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh" event={"ID":"039d44bb-1ad0-4916-8ef2-3cece4829506","Type":"ContainerStarted","Data":"bd406f9c4241cef51213ac7d66da73fb59f360c39b3da8dfddb80bee7a503913"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.323629 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"09fcfda8-434e-4759-81cc-47304cbbe9d3","Type":"ContainerStarted","Data":"15d43872fee12464bfed6c60d36e086eb12f618ed87aa39d9d79903e12aed140"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.325666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerStarted","Data":"e47d3e77ce83e6731fdca0338e3764007d631b786a20a291b2d3ac30da1a2204"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.327773 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" event={"ID":"ab397921-9519-48e8-a5c0-5c388d54b6cd","Type":"ContainerDied","Data":"fb40657aa3dcb246fdc0a993fa98fd739b898555df945a722c59ed513a24340b"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.327803 4932 scope.go:117] "RemoveContainer" containerID="7519d416745f73fbe57ab60a5e4c59b970e555510b0fcbd5d3f5ac822320b937" Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.327810 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.329474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerStarted","Data":"080ccaf3edee131274523286f1e1cdf3b8aebb0e277f6e516ffc7e73a0cc72c7"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.331526 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68" exitCode=0 Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.331569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerDied","Data":"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.331584 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerStarted","Data":"43daca4777cee280f31b3e73b817f441991f4957de8e06f5e125fa3c6e27e74a"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.332696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"c079ef0a75a184583fc3bcc63484ddbcd7e9466dbb03675318140b785c3f7c07"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.333947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f9432af7-4713-4805-b822-efcb8b1fb21d","Type":"ContainerStarted","Data":"30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.336487 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="7182e8ba-c70f-44ce-b628-21107829cb83" containerID="21d90ef666981de2a2798c5a9811496799691c81e9c63553c393f18b1c049e7d" exitCode=0 Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.336602 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerDied","Data":"21d90ef666981de2a2798c5a9811496799691c81e9c63553c393f18b1c049e7d"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.336624 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerStarted","Data":"4397e01bf815b57b3796c200c92c0f185a71fad8907ccac4ea649586543a8255"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.339137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fd0a010e-64af-4552-8098-747bf5644c3c","Type":"ContainerStarted","Data":"ccbbd8ff9c2d845a9b9d448c89fd5bd234f92bdd9b5820d334f83c200c320aeb"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.431369 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.437344 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.192487 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bac8c90-8ad0-4e01-8434-92f4bc659e1d" path="/var/lib/kubelet/pods/9bac8c90-8ad0-4e01-8434-92f4bc659e1d/volumes" Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.193119 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" path="/var/lib/kubelet/pods/ab397921-9519-48e8-a5c0-5c388d54b6cd/volumes" Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.193649 4932 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="fccb0fa8-b88d-469c-b88e-838aa9f5d481" path="/var/lib/kubelet/pods/fccb0fa8-b88d-469c-b88e-838aa9f5d481/volumes" Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.351879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerStarted","Data":"1ec0c920417917a1897259352c1ea8c97c2c31eb493540b5918d1e88980afcef"} Feb 18 19:52:05 crc kubenswrapper[4932]: I0218 19:52:05.430148 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerStarted","Data":"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"} Feb 18 19:52:05 crc kubenswrapper[4932]: I0218 19:52:05.430709 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:52:05 crc kubenswrapper[4932]: I0218 19:52:05.455439 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" podStartSLOduration=26.455415482 podStartE2EDuration="26.455415482s" podCreationTimestamp="2026-02-18 19:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:05.45047181 +0000 UTC m=+1089.032426655" watchObservedRunningTime="2026-02-18 19:52:05.455415482 +0000 UTC m=+1089.037370327" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.439746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerStarted","Data":"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.440259 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 
19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.442918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerStarted","Data":"0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.443323 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.445325 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fd0a010e-64af-4552-8098-747bf5644c3c","Type":"ContainerStarted","Data":"e06fac0e9c835022a5975f254d96d752a99a4fc07ffeaa18abdcddf486beeac5"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.445476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.447444 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh" event={"ID":"039d44bb-1ad0-4916-8ef2-3cece4829506","Type":"ContainerStarted","Data":"9ae0c69c90fd66c6cded06757075ca9d6936468e1ea8ee5d08da3677fd8f054b"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.447563 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-99qbh" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.451977 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"09fcfda8-434e-4759-81cc-47304cbbe9d3","Type":"ContainerStarted","Data":"2db0d0904c63721aa21a520d6b3ee4ba67d3afb109bbe34b6940fff392794d1b"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.455493 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca19a8de-2aaf-459e-bfcd-d73a819558b0" containerID="d2fb04b0f491285b17c4c9db5b62180a469fec03213c24b5e7175dfbc5dc620e" exitCode=0 Feb 18 19:52:06 crc 
kubenswrapper[4932]: I0218 19:52:06.455568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerDied","Data":"d2fb04b0f491285b17c4c9db5b62180a469fec03213c24b5e7175dfbc5dc620e"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.459493 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerStarted","Data":"ab4a1ef53fc46a73ae038e80a0b9f945d129feb61eaa7b9bb7cc45f2dc7ef05f"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.464257 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.60062522 podStartE2EDuration="20.464231243s" podCreationTimestamp="2026-02-18 19:51:46 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.490416571 +0000 UTC m=+1081.072371416" lastFinishedPulling="2026-02-18 19:52:05.354022594 +0000 UTC m=+1088.935977439" observedRunningTime="2026-02-18 19:52:06.452993066 +0000 UTC m=+1090.034947911" watchObservedRunningTime="2026-02-18 19:52:06.464231243 +0000 UTC m=+1090.046186118" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.465680 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerStarted","Data":"6b00a32228a57fd318a20db1c61c54b5b976211c74483d59bcbd1b6e0cb1c8ff"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.469846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f9432af7-4713-4805-b822-efcb8b1fb21d","Type":"ContainerStarted","Data":"983306ad45c7bd8aa27aa3c06c2ab4016ec1762a2f201370eef6e39173d64ffe"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.493691 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-99qbh" 
podStartSLOduration=10.59558404 podStartE2EDuration="17.493663528s" podCreationTimestamp="2026-02-18 19:51:49 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.486798692 +0000 UTC m=+1081.068753537" lastFinishedPulling="2026-02-18 19:52:04.38487818 +0000 UTC m=+1087.966833025" observedRunningTime="2026-02-18 19:52:06.483144019 +0000 UTC m=+1090.065098874" watchObservedRunningTime="2026-02-18 19:52:06.493663528 +0000 UTC m=+1090.075618383" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.506442 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" podStartSLOduration=26.506423503 podStartE2EDuration="26.506423503s" podCreationTimestamp="2026-02-18 19:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:06.497540954 +0000 UTC m=+1090.079495829" watchObservedRunningTime="2026-02-18 19:52:06.506423503 +0000 UTC m=+1090.088378348" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.533873 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.812928744 podStartE2EDuration="22.533851878s" podCreationTimestamp="2026-02-18 19:51:44 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.488375521 +0000 UTC m=+1081.070330366" lastFinishedPulling="2026-02-18 19:52:04.209298645 +0000 UTC m=+1087.791253500" observedRunningTime="2026-02-18 19:52:06.518385077 +0000 UTC m=+1090.100339922" watchObservedRunningTime="2026-02-18 19:52:06.533851878 +0000 UTC m=+1090.115806733" Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 19:52:07.480079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerStarted","Data":"f215ae9fbece324e9bd723a56e0e71d31c81d0090f9fea3975b162ab4d64e974"} Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 
19:52:07.483770 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerStarted","Data":"0ec488814f1030f7573530fc2f4023391364e9ba2569befe8796d3527f51d952"} Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 19:52:07.488909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerStarted","Data":"7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68"} Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 19:52:07.492287 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerStarted","Data":"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.503730 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"09fcfda8-434e-4759-81cc-47304cbbe9d3","Type":"ContainerStarted","Data":"29a4673339d988e2170beffb22f8b3b5cea2d6bfb31b090d279613c44a90caec"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.505552 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.508298 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerStarted","Data":"b7decf7f9f198ad146d800a055d0acbf836193fadb8fcea920110be23346445d"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.508572 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:08 crc 
kubenswrapper[4932]: I0218 19:52:08.508803 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.510327 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f9432af7-4713-4805-b822-efcb8b1fb21d","Type":"ContainerStarted","Data":"0e5edbbdc7dc9506430e0fb5a39f4970d5b46e1fee4aea0a909a6b7d12a0a541"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.525273 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.215887081 podStartE2EDuration="19.525258864s" podCreationTimestamp="2026-02-18 19:51:49 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.490242437 +0000 UTC m=+1081.072197272" lastFinishedPulling="2026-02-18 19:52:07.79961422 +0000 UTC m=+1091.381569055" observedRunningTime="2026-02-18 19:52:08.520804025 +0000 UTC m=+1092.102758890" watchObservedRunningTime="2026-02-18 19:52:08.525258864 +0000 UTC m=+1092.107213709" Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.552041 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lvg9q" podStartSLOduration=14.352921679 podStartE2EDuration="19.552013744s" podCreationTimestamp="2026-02-18 19:51:49 +0000 UTC" firstStartedPulling="2026-02-18 19:51:59.109643089 +0000 UTC m=+1082.691597944" lastFinishedPulling="2026-02-18 19:52:04.308735164 +0000 UTC m=+1087.890690009" observedRunningTime="2026-02-18 19:52:08.541346131 +0000 UTC m=+1092.123300986" watchObservedRunningTime="2026-02-18 19:52:08.552013744 +0000 UTC m=+1092.133968619" Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.592511 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.378848186 podStartE2EDuration="15.592497201s" podCreationTimestamp="2026-02-18 19:51:53 +0000 UTC" 
firstStartedPulling="2026-02-18 19:51:57.580727886 +0000 UTC m=+1081.162682731" lastFinishedPulling="2026-02-18 19:52:07.794376891 +0000 UTC m=+1091.376331746" observedRunningTime="2026-02-18 19:52:08.585545 +0000 UTC m=+1092.167499845" watchObservedRunningTime="2026-02-18 19:52:08.592497201 +0000 UTC m=+1092.174452046"
Feb 18 19:52:09 crc kubenswrapper[4932]: I0218 19:52:09.730521 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:52:09 crc kubenswrapper[4932]: I0218 19:52:09.731069 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:52:09 crc kubenswrapper[4932]: I0218 19:52:09.774222 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.248332 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.394873 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.569346 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.623335 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.687075 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"]
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.687766 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns" containerID="cri-o://f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" gracePeriod=10
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.802413 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"]
Feb 18 19:52:10 crc kubenswrapper[4932]: E0218 19:52:10.804890 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerName="init"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.805024 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerName="init"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.805327 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerName="init"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.806476 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.809603 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.842620 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"]
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937201 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937250 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937277 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937421 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.998028 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dhv68"]
Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.999347 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.001996 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.031199 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhv68"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042471 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042545 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042640 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.043680 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.044348 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.049333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.071575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144291 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgww2\" (UniqueName: \"kubernetes.io/projected/8abfc229-97bd-4301-aeca-808c88209da4-kube-api-access-hgww2\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144313 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-combined-ca-bundle\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144353 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abfc229-97bd-4301-aeca-808c88209da4-config\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovs-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144417 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovn-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.149147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246073 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246415 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgww2\" (UniqueName: \"kubernetes.io/projected/8abfc229-97bd-4301-aeca-808c88209da4-kube-api-access-hgww2\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-combined-ca-bundle\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246789 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abfc229-97bd-4301-aeca-808c88209da4-config\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovs-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246901 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovn-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.247213 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovn-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.247897 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abfc229-97bd-4301-aeca-808c88209da4-config\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.247969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovs-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.250485 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.252114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-combined-ca-bundle\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.252758 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.267915 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgww2\" (UniqueName: \"kubernetes.io/projected/8abfc229-97bd-4301-aeca-808c88209da4-kube-api-access-hgww2\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.310779 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.318514 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dhv68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.336238 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"]
Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.336805 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="init"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.336839 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="init"
Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.336878 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.336889 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.337183 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.338505 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.341756 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.347851 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") "
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.347921 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.348012 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") "
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.348075 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") "
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.363291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm" (OuterVolumeSpecName: "kube-api-access-jgghm") pod "ca226f67-28b6-4585-a6ed-7d4394cc2a15" (UID: "ca226f67-28b6-4585-a6ed-7d4394cc2a15"). InnerVolumeSpecName "kube-api-access-jgghm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.395445 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.411238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config" (OuterVolumeSpecName: "config") pod "ca226f67-28b6-4585-a6ed-7d4394cc2a15" (UID: "ca226f67-28b6-4585-a6ed-7d4394cc2a15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.411644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca226f67-28b6-4585-a6ed-7d4394cc2a15" (UID: "ca226f67-28b6-4585-a6ed-7d4394cc2a15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.446078 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450295 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450350 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450385 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450525 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450538 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450548 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.537719 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" exitCode=0
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538031 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerDied","Data":"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"}
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerDied","Data":"43daca4777cee280f31b3e73b817f441991f4957de8e06f5e125fa3c6e27e74a"}
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538947 4932 scope.go:117] "RemoveContainer" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.555856 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.555924 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.555969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.556011 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.556028 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.557529 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.558567 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.559127 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.559637 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.589133 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.600560 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.603405 4932 scope.go:117] "RemoveContainer" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.604149 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.610978 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.637903 4932 scope.go:117] "RemoveContainer" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"
Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.639654 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b\": container with ID starting with f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b not found: ID does not exist" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.639693 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"} err="failed to get container status \"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b\": rpc error: code = NotFound desc = could not find container \"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b\": container with ID starting with f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b not found: ID does not exist"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.639721 4932 scope.go:117] "RemoveContainer" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"
Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.640114 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68\": container with ID starting with fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68 not found: ID does not exist" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.640165 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"} err="failed to get container status \"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68\": rpc error: code = NotFound desc = could not find container \"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68\": container with ID starting with fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68 not found: ID does not exist"
Feb 18 19:52:11 crc kubenswrapper[4932]: W0218 19:52:11.681921 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded4ea879_727a_4bf0_b18a_3d25d21cd31a.slice/crio-c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046 WatchSource:0}: Error finding container c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046: Status 404 returned error can't find the container with id c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.682664 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.687273 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.784636 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.786321 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.789512 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.789875 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vktdm"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.790036 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.790265 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.793357 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.862979 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864366 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864608 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-scripts\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864698 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864782 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-config\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864946 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fdg\" (UniqueName: \"kubernetes.io/projected/7f6fa544-8da9-4404-94a2-c5ea567caa32-kube-api-access-59fdg\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.900369 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhv68"]
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.965940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966058 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-scripts\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966081 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966101 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-config\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fdg\" (UniqueName: \"kubernetes.io/projected/7f6fa544-8da9-4404-94a2-c5ea567caa32-kube-api-access-59fdg\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.967615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.967804 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-scripts\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.967876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-config\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.970287 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.971658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0"
Feb 18 19:52:11 crc
kubenswrapper[4932]: I0218 19:52:11.979339 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.981428 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fdg\" (UniqueName: \"kubernetes.io/projected/7f6fa544-8da9-4404-94a2-c5ea567caa32-kube-api-access-59fdg\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.063301 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.191078 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:12 crc kubenswrapper[4932]: W0218 19:52:12.196942 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a9f909_edc1_4196_8a7e_8d9195ac8c0a.slice/crio-dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79 WatchSource:0}: Error finding container dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79: Status 404 returned error can't find the container with id dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.497898 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:52:12 crc kubenswrapper[4932]: W0218 19:52:12.500455 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f6fa544_8da9_4404_94a2_c5ea567caa32.slice/crio-f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2 WatchSource:0}: Error finding container f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2: Status 404 returned error can't find the container with id f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.546045 4932 generic.go:334] "Generic (PLEG): container finished" podID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerID="f4d207255e04261f1256876d0640f7a397c12534642315fef1d1773ac5c24dd5" exitCode=0 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.546304 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerDied","Data":"f4d207255e04261f1256876d0640f7a397c12534642315fef1d1773ac5c24dd5"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.546358 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerStarted","Data":"dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.548681 4932 generic.go:334] "Generic (PLEG): container finished" podID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerID="85699d80211e1e6e06ae8a51e70269eb8105b39190aa91df9a7ef0cc4c6f3ba5" exitCode=0 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.548742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" event={"ID":"ed4ea879-727a-4bf0-b18a-3d25d21cd31a","Type":"ContainerDied","Data":"85699d80211e1e6e06ae8a51e70269eb8105b39190aa91df9a7ef0cc4c6f3ba5"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.548933 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" event={"ID":"ed4ea879-727a-4bf0-b18a-3d25d21cd31a","Type":"ContainerStarted","Data":"c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.554946 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7f6fa544-8da9-4404-94a2-c5ea567caa32","Type":"ContainerStarted","Data":"f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.557558 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhv68" event={"ID":"8abfc229-97bd-4301-aeca-808c88209da4","Type":"ContainerStarted","Data":"8cd5bbb9abdc963fcdfa397efcc891dccf8d1b9ec75b11ff528ab2dd69c95bd3"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.557642 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhv68" event={"ID":"8abfc229-97bd-4301-aeca-808c88209da4","Type":"ContainerStarted","Data":"712b10cab6f249c2fc4db604b074472bcc7fcd90b45e46959c4f67a35124d051"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.611951 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dhv68" podStartSLOduration=2.611900386 podStartE2EDuration="2.611900386s" podCreationTimestamp="2026-02-18 19:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:12.603213752 +0000 UTC m=+1096.185168597" watchObservedRunningTime="2026-02-18 19:52:12.611900386 +0000 UTC m=+1096.193855231" Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.982056 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.085987 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.086062 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.086138 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.086743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.093482 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp" (OuterVolumeSpecName: "kube-api-access-rpjjp") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "kube-api-access-rpjjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.104644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config" (OuterVolumeSpecName: "config") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.114614 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.142460 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.188717 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.189070 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.189099 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.189116 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.191774 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" path="/var/lib/kubelet/pods/ca226f67-28b6-4585-a6ed-7d4394cc2a15/volumes" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.566373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerStarted","Data":"a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86"} Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.567331 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.569977 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.570475 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" event={"ID":"ed4ea879-727a-4bf0-b18a-3d25d21cd31a","Type":"ContainerDied","Data":"c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046"} Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.570506 4932 scope.go:117] "RemoveContainer" containerID="85699d80211e1e6e06ae8a51e70269eb8105b39190aa91df9a7ef0cc4c6f3ba5" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.588120 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" podStartSLOduration=2.588096364 podStartE2EDuration="2.588096364s" podCreationTimestamp="2026-02-18 19:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:13.587330555 +0000 UTC m=+1097.169285420" watchObservedRunningTime="2026-02-18 19:52:13.588096364 +0000 UTC m=+1097.170051219" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.629305 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.636109 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.577876 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7f6fa544-8da9-4404-94a2-c5ea567caa32","Type":"ContainerStarted","Data":"66aab2e3af98dc2f65a6dc3564fb435262e14c83f3526943ce4f75c072c3886a"} Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.580117 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.580129 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7f6fa544-8da9-4404-94a2-c5ea567caa32","Type":"ContainerStarted","Data":"af938aaf272e82338271325c688c986bffaf4cdc3c88c8253e481cbc4c3d5cd7"} Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.581560 4932 generic.go:334] "Generic (PLEG): container finished" podID="d9dd7155-a814-4ae0-92b9-6e71461473d5" containerID="6b00a32228a57fd318a20db1c61c54b5b976211c74483d59bcbd1b6e0cb1c8ff" exitCode=0 Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.581634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerDied","Data":"6b00a32228a57fd318a20db1c61c54b5b976211c74483d59bcbd1b6e0cb1c8ff"} Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.608091 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.65409031 podStartE2EDuration="3.608073651s" podCreationTimestamp="2026-02-18 19:52:11 +0000 UTC" firstStartedPulling="2026-02-18 19:52:12.502792408 +0000 UTC m=+1096.084747253" lastFinishedPulling="2026-02-18 19:52:13.456775759 +0000 UTC m=+1097.038730594" observedRunningTime="2026-02-18 19:52:14.598588567 +0000 UTC m=+1098.180543412" watchObservedRunningTime="2026-02-18 19:52:14.608073651 +0000 UTC m=+1098.190028496" Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.724690 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.192388 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" path="/var/lib/kubelet/pods/ed4ea879-727a-4bf0-b18a-3d25d21cd31a/volumes" Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.594436 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerStarted","Data":"8178a923b73598beccaa3903f2c974da013c4273b78c68f62efb4d7cc0fa4624"} Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.597032 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf" exitCode=0 Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.597079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf"} Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.637017 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.973940996 podStartE2EDuration="32.636990257s" podCreationTimestamp="2026-02-18 19:51:43 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.883739597 +0000 UTC m=+1080.465694442" lastFinishedPulling="2026-02-18 19:52:04.546788818 +0000 UTC m=+1088.128743703" observedRunningTime="2026-02-18 19:52:15.626606861 +0000 UTC m=+1099.208561746" watchObservedRunningTime="2026-02-18 19:52:15.636990257 +0000 UTC m=+1099.218945142" Feb 18 19:52:16 crc kubenswrapper[4932]: I0218 19:52:16.607010 4932 generic.go:334] "Generic (PLEG): container finished" podID="915c727d-cb48-4649-bd71-30a5edf798d5" containerID="ab4a1ef53fc46a73ae038e80a0b9f945d129feb61eaa7b9bb7cc45f2dc7ef05f" exitCode=0 Feb 18 19:52:16 crc kubenswrapper[4932]: I0218 19:52:16.607078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerDied","Data":"ab4a1ef53fc46a73ae038e80a0b9f945d129feb61eaa7b9bb7cc45f2dc7ef05f"} Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.204060 4932 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.204880 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns" containerID="cri-o://a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86" gracePeriod=10 Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.207765 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.209806 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.228638 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:52:17 crc kubenswrapper[4932]: E0218 19:52:17.228975 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerName="init" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.228990 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerName="init" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.229154 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerName="init" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.238221 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.263038 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364579 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364613 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364664 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466941 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.467021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.467806 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.468968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.469782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.469847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.493404 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2b4k\" (UniqueName: 
\"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.614491 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.617840 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerStarted","Data":"cfffd12e2db31f1f4eed158bad3236b5de7cd7cf1024afe930a98a37d235a483"} Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.621787 4932 generic.go:334] "Generic (PLEG): container finished" podID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerID="a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86" exitCode=0 Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.621835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerDied","Data":"a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86"} Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.625781 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.653731 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.157460115 podStartE2EDuration="36.653708277s" podCreationTimestamp="2026-02-18 19:51:41 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.888883614 +0000 UTC m=+1080.470838459" lastFinishedPulling="2026-02-18 19:52:04.385131746 +0000 UTC m=+1087.967086621" observedRunningTime="2026-02-18 19:52:17.642372678 +0000 UTC m=+1101.224327543" watchObservedRunningTime="2026-02-18 19:52:17.653708277 +0000 UTC m=+1101.235663132" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.774182 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.774342 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.774389 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.775252 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod 
\"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.775345 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.786383 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr" (OuterVolumeSpecName: "kube-api-access-p6mhr") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "kube-api-access-p6mhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.841037 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config" (OuterVolumeSpecName: "config") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.862799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.877132 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.877161 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.877210 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.901574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.911581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.980490 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.980525 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.145975 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:52:18 crc kubenswrapper[4932]: W0218 19:52:18.151665 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b88bfc_e293_4af3_a085_184607bf9327.slice/crio-4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a WatchSource:0}: Error finding container 4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a: Status 404 returned error can't find the container with id 4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437033 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.437628 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437643 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns" Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.437672 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" 
containerName="init" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437678 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="init" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437836 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.446188 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.451295 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.451559 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-28sds" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.451576 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.452033 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.469138 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595716 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-cache\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595761 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-lock\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595819 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595837 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595888 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dxh\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-kube-api-access-v9dxh\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.631025 4932 generic.go:334] "Generic (PLEG): container finished" podID="93b88bfc-e293-4af3-a085-184607bf9327" containerID="a93d81ac35fef706c2981873bb26c1272af93758393d8995dcc39345d9e18399" exitCode=0 Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 
19:52:18.631202 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerDied","Data":"a93d81ac35fef706c2981873bb26c1272af93758393d8995dcc39345d9e18399"} Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.632039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerStarted","Data":"4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a"} Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.639592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerDied","Data":"dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79"} Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.639637 4932 scope.go:117] "RemoveContainer" containerID="a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.639925 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697378 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dxh\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-kube-api-access-v9dxh\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-cache\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697501 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-lock\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697518 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697592 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.698071 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-lock\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.698410 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.698814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-cache\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.699335 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.699348 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.699392 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:52:19.199377805 +0000 UTC m=+1102.781332650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.711605 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.716059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dxh\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-kube-api-access-v9dxh\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.718494 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.835291 4932 scope.go:117] "RemoveContainer" containerID="f4d207255e04261f1256876d0640f7a397c12534642315fef1d1773ac5c24dd5" Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.849602 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.863030 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.189893 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" path="/var/lib/kubelet/pods/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a/volumes" Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.210420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:19 crc kubenswrapper[4932]: E0218 19:52:19.210620 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:19 crc kubenswrapper[4932]: E0218 19:52:19.210742 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:19 crc kubenswrapper[4932]: E0218 19:52:19.210803 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:20.210783918 +0000 UTC m=+1103.792738763 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.647630 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerStarted","Data":"f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e"} Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.648800 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.671216 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" podStartSLOduration=2.671192375 podStartE2EDuration="2.671192375s" podCreationTimestamp="2026-02-18 19:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:19.667459154 +0000 UTC m=+1103.249413999" watchObservedRunningTime="2026-02-18 19:52:19.671192375 +0000 UTC m=+1103.253147220" Feb 18 19:52:20 crc kubenswrapper[4932]: I0218 19:52:20.229011 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:20 crc kubenswrapper[4932]: E0218 19:52:20.229634 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:20 crc kubenswrapper[4932]: E0218 19:52:20.229749 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:20 crc kubenswrapper[4932]: E0218 19:52:20.229803 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:22.229788891 +0000 UTC m=+1105.811743726 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.268743 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:22 crc kubenswrapper[4932]: E0218 19:52:22.268951 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:22 crc kubenswrapper[4932]: E0218 19:52:22.269217 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:22 crc kubenswrapper[4932]: E0218 19:52:22.269278 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:26.269260791 +0000 UTC m=+1109.851215636 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.340452 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sq9sk"] Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.341797 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.344564 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.344696 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.344902 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.353227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sq9sk"] Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474357 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " 
pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474392 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474619 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474695 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.475057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"swift-ring-rebalance-sq9sk\" (UID: 
\"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576526 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576692 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " 
pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576898 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.577097 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.577314 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.578558 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.578615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.584676 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.592848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.592854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.597279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.671749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:23 crc kubenswrapper[4932]: W0218 19:52:23.130963 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04953cd9_9de3_46b5_8b86_382b2d2291cd.slice/crio-ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c WatchSource:0}: Error finding container ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c: Status 404 returned error can't find the container with id ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.131913 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sq9sk"] Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.133352 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.133376 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.683914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerStarted","Data":"ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c"} Feb 18 19:52:24 crc kubenswrapper[4932]: I0218 19:52:24.503421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 19:52:24 crc kubenswrapper[4932]: I0218 19:52:24.503491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 19:52:25 crc kubenswrapper[4932]: I0218 19:52:25.843604 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 19:52:25 crc kubenswrapper[4932]: 
I0218 19:52:25.998793 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 19:52:26 crc kubenswrapper[4932]: I0218 19:52:26.365260 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.365507 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.365538 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.365606 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:34.36558702 +0000 UTC m=+1117.947541875 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.982543 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:53834->38.102.83.190:41227: write tcp 38.102.83.190:53834->38.102.83.190:41227: write: broken pipe Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.605585 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.605646 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.615908 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.668783 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.678470 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" containerID="cri-o://0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1" gracePeriod=10 Feb 18 19:52:27 crc 
kubenswrapper[4932]: I0218 19:52:27.755640 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.870792 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 19:52:28 crc kubenswrapper[4932]: I0218 19:52:28.734394 4932 generic.go:334] "Generic (PLEG): container finished" podID="7182e8ba-c70f-44ce-b628-21107829cb83" containerID="0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1" exitCode=0 Feb 18 19:52:28 crc kubenswrapper[4932]: I0218 19:52:28.735192 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerDied","Data":"0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1"} Feb 18 19:52:29 crc kubenswrapper[4932]: E0218 19:52:29.966856 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 18 19:52:29 crc kubenswrapper[4932]: E0218 19:52:29.967080 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnvgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(cf98dd42-289f-43fa-b4dc-c6ff814a3c25): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.292504 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.465607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"7182e8ba-c70f-44ce-b628-21107829cb83\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.465798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"7182e8ba-c70f-44ce-b628-21107829cb83\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.465866 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"7182e8ba-c70f-44ce-b628-21107829cb83\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.470932 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph" (OuterVolumeSpecName: "kube-api-access-t97ph") pod "7182e8ba-c70f-44ce-b628-21107829cb83" (UID: "7182e8ba-c70f-44ce-b628-21107829cb83"). InnerVolumeSpecName "kube-api-access-t97ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.504670 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7182e8ba-c70f-44ce-b628-21107829cb83" (UID: "7182e8ba-c70f-44ce-b628-21107829cb83"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.507640 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config" (OuterVolumeSpecName: "config") pod "7182e8ba-c70f-44ce-b628-21107829cb83" (UID: "7182e8ba-c70f-44ce-b628-21107829cb83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.568440 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.569536 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.569582 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.751802 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerDied","Data":"4397e01bf815b57b3796c200c92c0f185a71fad8907ccac4ea649586543a8255"} Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.751857 4932 scope.go:117] "RemoveContainer" containerID="0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.751990 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.789498 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.795208 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.192225 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" path="/var/lib/kubelet/pods/7182e8ba-c70f-44ce-b628-21107829cb83/volumes" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852204 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:31 crc kubenswrapper[4932]: E0218 19:52:31.852602 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="init" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852621 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="init" Feb 18 19:52:31 crc kubenswrapper[4932]: E0218 19:52:31.852655 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852662 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852853 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.853557 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.856449 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.872048 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.892708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.892833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.994610 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.994704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"root-account-create-update-9pgp9\" (UID: 
\"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.996010 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.017782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.051872 4932 scope.go:117] "RemoveContainer" containerID="21d90ef666981de2a2798c5a9811496799691c81e9c63553c393f18b1c049e7d" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.161922 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.192434 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.685686 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.770913 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9pgp9" event={"ID":"3bd41ee5-d385-424f-996a-b3baf7f9eb8a","Type":"ContainerStarted","Data":"5eacdf25818da33933d5d61415a3ca0021d992738e8dfeca409d5ccd0c748a39"} Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.773445 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerStarted","Data":"3511d7edf13acf4b55c85650c80b80f04682a6a62d5515928313b0d0eefcc028"} Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.799398 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sq9sk" podStartSLOduration=1.793678818 podStartE2EDuration="10.799377317s" podCreationTimestamp="2026-02-18 19:52:22 +0000 UTC" firstStartedPulling="2026-02-18 19:52:23.134970148 +0000 UTC m=+1106.716925023" lastFinishedPulling="2026-02-18 19:52:32.140668647 +0000 UTC m=+1115.722623522" observedRunningTime="2026-02-18 19:52:32.789859843 +0000 UTC m=+1116.371814698" watchObservedRunningTime="2026-02-18 19:52:32.799377317 +0000 UTC m=+1116.381332172" Feb 18 19:52:33 crc kubenswrapper[4932]: E0218 19:52:33.449310 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd41ee5_d385_424f_996a_b3baf7f9eb8a.slice/crio-conmon-0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd41ee5_d385_424f_996a_b3baf7f9eb8a.slice/crio-0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:52:33 crc kubenswrapper[4932]: I0218 19:52:33.796381 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693"} Feb 18 19:52:33 crc kubenswrapper[4932]: I0218 19:52:33.799138 4932 generic.go:334] "Generic (PLEG): container finished" podID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerID="0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e" exitCode=0 Feb 18 19:52:33 crc kubenswrapper[4932]: I0218 19:52:33.800896 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9pgp9" event={"ID":"3bd41ee5-d385-424f-996a-b3baf7f9eb8a","Type":"ContainerDied","Data":"0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e"} Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.442599 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:34 crc kubenswrapper[4932]: E0218 19:52:34.442787 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:34 crc kubenswrapper[4932]: E0218 19:52:34.443015 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:34 crc kubenswrapper[4932]: E0218 19:52:34.443089 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:50.443057611 +0000 UTC m=+1134.025012456 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.906278 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.908721 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.915129 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.002266 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.003539 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.006366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.011157 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054448 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"glance-5833-account-create-update-fxm2t\" 
(UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.081611 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-99qbh" podUID="039d44bb-1ad0-4916-8ef2-3cece4829506" containerName="ovn-controller" probeResult="failure" output=< Feb 18 19:52:35 crc kubenswrapper[4932]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 19:52:35 crc kubenswrapper[4932]: > Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.155937 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156101 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.157390 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.175326 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.175898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.238206 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.323926 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.716626 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.717628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.755575 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.765983 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.766031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.835582 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.837325 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.844830 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.847344 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9pgp9" event={"ID":"3bd41ee5-d385-424f-996a-b3baf7f9eb8a","Type":"ContainerDied","Data":"5eacdf25818da33933d5d61415a3ca0021d992738e8dfeca409d5ccd0c748a39"} Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.847382 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eacdf25818da33933d5d61415a3ca0021d992738e8dfeca409d5ccd0c748a39" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.847710 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.868081 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.868130 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.869073 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod 
\"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.887336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.943670 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.960793 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 19:52:35 crc kubenswrapper[4932]: E0218 19:52:35.961256 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerName="mariadb-account-create-update" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.961275 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerName="mariadb-account-create-update" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.961438 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerName="mariadb-account-create-update" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.961995 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970754 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970813 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod 
\"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.971763 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd41ee5-d385-424f-996a-b3baf7f9eb8a" (UID: "3bd41ee5-d385-424f-996a-b3baf7f9eb8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.974814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t" (OuterVolumeSpecName: "kube-api-access-rkd5t") pod "3bd41ee5-d385-424f-996a-b3baf7f9eb8a" (UID: "3bd41ee5-d385-424f-996a-b3baf7f9eb8a"). InnerVolumeSpecName "kube-api-access-rkd5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.976132 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.038749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.067767 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.071886 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072136 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: 
\"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072279 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072290 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.073219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.073225 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.075162 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.082507 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.096524 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk24q\" (UniqueName: 
\"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.099080 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.176623 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.177020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.250094 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.279393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.279548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.281859 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.291641 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.291936 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.308502 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.395287 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.401298 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 19:52:36 crc kubenswrapper[4932]: W0218 19:52:36.408603 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4c8a6a6_4944_4c6f_be98_9dde833b89e5.slice/crio-a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b WatchSource:0}: Error finding container a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b: Status 404 returned error can't find the container with id a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.523520 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 19:52:36 crc kubenswrapper[4932]: W0218 19:52:36.566358 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64352a4d_f3af_44e1_b1d7_cc5e125de560.slice/crio-e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14 WatchSource:0}: Error finding container e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14: Status 404 returned error can't find the container with id e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14 Feb 18 19:52:36 crc kubenswrapper[4932]: E0218 19:52:36.706421 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.730939 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.856209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768"} Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.857167 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bd21-account-create-update-kcn9v" event={"ID":"56349fdd-8b87-4910-b182-555b5913d5ee","Type":"ContainerStarted","Data":"f9bb2d66e25bd07650a62fc4ddaa3bf964c84c4c8996178f6cc499147ca25363"} Feb 18 19:52:36 crc kubenswrapper[4932]: E0218 19:52:36.858247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.859865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerStarted","Data":"2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974"} Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.859921 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerStarted","Data":"e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14"} Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.862194 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" 
event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerStarted","Data":"3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a"} Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.862228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerStarted","Data":"ce3621b623070bf468ed09862ce254a24da7ba911cfd39df905bb1ca3d03fb1e"} Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.864375 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.864814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerStarted","Data":"eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28"} Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.864871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerStarted","Data":"a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b"} Feb 18 19:52:36 crc kubenswrapper[4932]: W0218 19:52:36.925207 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35590261_332c_47e0_89e9_4eef3fd36086.slice/crio-a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047 WatchSource:0}: Error finding container a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047: Status 404 returned error can't find the container with id a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047 Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.926945 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 
19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.928846 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-zhvln" podStartSLOduration=1.928829511 podStartE2EDuration="1.928829511s" podCreationTimestamp="2026-02-18 19:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:36.899033187 +0000 UTC m=+1120.480988032" watchObservedRunningTime="2026-02-18 19:52:36.928829511 +0000 UTC m=+1120.510784356" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.953487 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5833-account-create-update-fxm2t" podStartSLOduration=2.953469677 podStartE2EDuration="2.953469677s" podCreationTimestamp="2026-02-18 19:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:36.916400744 +0000 UTC m=+1120.498355589" watchObservedRunningTime="2026-02-18 19:52:36.953469677 +0000 UTC m=+1120.535424522" Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.954008 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.965477 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-js74w" podStartSLOduration=2.965458032 podStartE2EDuration="2.965458032s" podCreationTimestamp="2026-02-18 19:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:36.932706166 +0000 UTC m=+1120.514661011" watchObservedRunningTime="2026-02-18 19:52:36.965458032 +0000 UTC m=+1120.547412877" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.286566 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/watcher-db-create-vtbzd"] Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.287809 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.319020 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vtbzd"] Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.407814 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.407896 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.468126 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.469860 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.471829 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.478626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.509825 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.509890 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.510736 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.540100 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: 
I0218 19:52:37.611960 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.612044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.620676 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.713210 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.714018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.714602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.731377 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.785269 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.907959 4932 generic.go:334] "Generic (PLEG): container finished" podID="35590261-332c-47e0-89e9-4eef3fd36086" containerID="3b567de8b4f1ae33989815fad19a6d8b9f69d7df099f4fd8ff235740848c1cc0" exitCode=0 Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.908023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e952-account-create-update-jjrs6" event={"ID":"35590261-332c-47e0-89e9-4eef3fd36086","Type":"ContainerDied","Data":"3b567de8b4f1ae33989815fad19a6d8b9f69d7df099f4fd8ff235740848c1cc0"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.908047 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e952-account-create-update-jjrs6" event={"ID":"35590261-332c-47e0-89e9-4eef3fd36086","Type":"ContainerStarted","Data":"a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.911752 4932 generic.go:334] "Generic (PLEG): container finished" podID="56349fdd-8b87-4910-b182-555b5913d5ee" 
containerID="5e6b5516d234b57d2f859d33d51d54c0aee524d02399dad696a4642cf7cceb8a" exitCode=0 Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.911856 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bd21-account-create-update-kcn9v" event={"ID":"56349fdd-8b87-4910-b182-555b5913d5ee","Type":"ContainerDied","Data":"5e6b5516d234b57d2f859d33d51d54c0aee524d02399dad696a4642cf7cceb8a"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.914835 4932 generic.go:334] "Generic (PLEG): container finished" podID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerID="439c6cd70d2e38e21f55a810c1fb66ab1e1dc66541977f85b2ca4f91d6caf61b" exitCode=0 Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.914890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8qr" event={"ID":"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5","Type":"ContainerDied","Data":"439c6cd70d2e38e21f55a810c1fb66ab1e1dc66541977f85b2ca4f91d6caf61b"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.914909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8qr" event={"ID":"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5","Type":"ContainerStarted","Data":"1d674917a20214787e1b8129748fdeaa37c9d2e1ee0acfb9283d23f2c9010653"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.917741 4932 generic.go:334] "Generic (PLEG): container finished" podID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerID="2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974" exitCode=0 Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.917896 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerDied","Data":"2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.920960 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerID="3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a" exitCode=0 Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.921004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerDied","Data":"3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a"} Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.926950 4932 generic.go:334] "Generic (PLEG): container finished" podID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerID="eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28" exitCode=0 Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.927331 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerDied","Data":"eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28"} Feb 18 19:52:37 crc kubenswrapper[4932]: E0218 19:52:37.932348 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" Feb 18 19:52:38 crc kubenswrapper[4932]: W0218 19:52:38.090507 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02bb1c31_7377_432f_8434_72981200f1ac.slice/crio-958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf WatchSource:0}: Error finding container 958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf: Status 404 returned error can't find the container with id 
958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.102448 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vtbzd"] Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.240679 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.249236 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.262284 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.938705 4932 generic.go:334] "Generic (PLEG): container finished" podID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerID="3a33312c61bc35aede7b854947f5cacef494c07faca9fd46ae2f217a195bc457" exitCode=0 Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.938961 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-734d-account-create-update-stk6x" event={"ID":"bec590bc-e2ef-49e0-80be-27af6f69aa06","Type":"ContainerDied","Data":"3a33312c61bc35aede7b854947f5cacef494c07faca9fd46ae2f217a195bc457"} Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.938985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-734d-account-create-update-stk6x" event={"ID":"bec590bc-e2ef-49e0-80be-27af6f69aa06","Type":"ContainerStarted","Data":"9bf0c8b6e14124204af0268e3540567cb7b036f9d1ead456934ebd8e07330a8e"} Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.940805 4932 generic.go:334] "Generic (PLEG): container finished" podID="02bb1c31-7377-432f-8434-72981200f1ac" containerID="95e00440e590eb387c9cf8e2e2f9778a04bbe9e0e014879d57139cdcea3fd2d4" exitCode=0 Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.940858 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-db-create-vtbzd" event={"ID":"02bb1c31-7377-432f-8434-72981200f1ac","Type":"ContainerDied","Data":"95e00440e590eb387c9cf8e2e2f9778a04bbe9e0e014879d57139cdcea3fd2d4"} Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.940874 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vtbzd" event={"ID":"02bb1c31-7377-432f-8434-72981200f1ac","Type":"ContainerStarted","Data":"958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.190985 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" path="/var/lib/kubelet/pods/3bd41ee5-d385-424f-996a-b3baf7f9eb8a/volumes" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.332033 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.454289 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.454440 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.456827 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"7fa1fef8-5a2e-4518-8641-d4b594fc29a3" (UID: "7fa1fef8-5a2e-4518-8641-d4b594fc29a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.470494 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl" (OuterVolumeSpecName: "kube-api-access-7mkrl") pod "7fa1fef8-5a2e-4518-8641-d4b594fc29a3" (UID: "7fa1fef8-5a2e-4518-8641-d4b594fc29a3"). InnerVolumeSpecName "kube-api-access-7mkrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.557010 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.557047 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.660456 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.667403 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.672216 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.684449 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.695137 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.758889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.759025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.759570 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4c8a6a6-4944-4c6f-be98-9dde833b89e5" (UID: "c4c8a6a6-4944-4c6f-be98-9dde833b89e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.763256 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd" (OuterVolumeSpecName: "kube-api-access-dd8cd") pod "c4c8a6a6-4944-4c6f-be98-9dde833b89e5" (UID: "c4c8a6a6-4944-4c6f-be98-9dde833b89e5"). InnerVolumeSpecName "kube-api-access-dd8cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860433 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860510 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"35590261-332c-47e0-89e9-4eef3fd36086\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860556 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"35590261-332c-47e0-89e9-4eef3fd36086\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"56349fdd-8b87-4910-b182-555b5913d5ee\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860628 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860652 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"64352a4d-f3af-44e1-b1d7-cc5e125de560\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860690 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"56349fdd-8b87-4910-b182-555b5913d5ee\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860716 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"64352a4d-f3af-44e1-b1d7-cc5e125de560\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861045 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35590261-332c-47e0-89e9-4eef3fd36086" (UID: "35590261-332c-47e0-89e9-4eef3fd36086"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64352a4d-f3af-44e1-b1d7-cc5e125de560" (UID: "64352a4d-f3af-44e1-b1d7-cc5e125de560"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861659 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861685 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861698 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861710 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" (UID: "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861933 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56349fdd-8b87-4910-b182-555b5913d5ee" (UID: "56349fdd-8b87-4910-b182-555b5913d5ee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.863824 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q" (OuterVolumeSpecName: "kube-api-access-jk24q") pod "56349fdd-8b87-4910-b182-555b5913d5ee" (UID: "56349fdd-8b87-4910-b182-555b5913d5ee"). InnerVolumeSpecName "kube-api-access-jk24q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.864297 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z" (OuterVolumeSpecName: "kube-api-access-c7g9z") pod "35590261-332c-47e0-89e9-4eef3fd36086" (UID: "35590261-332c-47e0-89e9-4eef3fd36086"). InnerVolumeSpecName "kube-api-access-c7g9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.864959 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z" (OuterVolumeSpecName: "kube-api-access-q4v6z") pod "64352a4d-f3af-44e1-b1d7-cc5e125de560" (UID: "64352a4d-f3af-44e1-b1d7-cc5e125de560"). InnerVolumeSpecName "kube-api-access-q4v6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.865002 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn" (OuterVolumeSpecName: "kube-api-access-vggbn") pod "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" (UID: "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5"). InnerVolumeSpecName "kube-api-access-vggbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.955050 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.955046 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bd21-account-create-update-kcn9v" event={"ID":"56349fdd-8b87-4910-b182-555b5913d5ee","Type":"ContainerDied","Data":"f9bb2d66e25bd07650a62fc4ddaa3bf964c84c4c8996178f6cc499147ca25363"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.955486 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bb2d66e25bd07650a62fc4ddaa3bf964c84c4c8996178f6cc499147ca25363" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.959536 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8qr" event={"ID":"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5","Type":"ContainerDied","Data":"1d674917a20214787e1b8129748fdeaa37c9d2e1ee0acfb9283d23f2c9010653"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.959661 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d674917a20214787e1b8129748fdeaa37c9d2e1ee0acfb9283d23f2c9010653" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.959624 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962011 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962046 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerDied","Data":"ce3621b623070bf468ed09862ce254a24da7ba911cfd39df905bb1ca3d03fb1e"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962232 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3621b623070bf468ed09862ce254a24da7ba911cfd39df905bb1ca3d03fb1e" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962668 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962694 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962707 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962720 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962731 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962742 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.964151 4932 generic.go:334] "Generic (PLEG): container finished" podID="4a133994-7b33-4db4-a923-5b90d51e47b9" containerID="f215ae9fbece324e9bd723a56e0e71d31c81d0090f9fea3975b162ab4d64e974" exitCode=0 Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.964189 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerDied","Data":"f215ae9fbece324e9bd723a56e0e71d31c81d0090f9fea3975b162ab4d64e974"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.966959 4932 generic.go:334] "Generic (PLEG): container finished" podID="cd547864-4d03-45ae-8bb1-10a360d36599" containerID="7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68" exitCode=0 Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.967057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerDied","Data":"7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.971663 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e952-account-create-update-jjrs6" event={"ID":"35590261-332c-47e0-89e9-4eef3fd36086","Type":"ContainerDied","Data":"a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.971729 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047" Feb 18 19:52:39 crc 
kubenswrapper[4932]: I0218 19:52:39.971823 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.978417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerDied","Data":"a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.978492 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.978545 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.982226 4932 generic.go:334] "Generic (PLEG): container finished" podID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d" exitCode=0 Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.982519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerDied","Data":"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.990094 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.990368 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerDied","Data":"e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.990414 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.088376 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-99qbh" podUID="039d44bb-1ad0-4916-8ef2-3cece4829506" containerName="ovn-controller" probeResult="failure" output=< Feb 18 19:52:40 crc kubenswrapper[4932]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 19:52:40 crc kubenswrapper[4932]: > Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.147605 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.158849 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.400922 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.420653 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.420987 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421002 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421013 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421020 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421029 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421035 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421046 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421051 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421071 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421076 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421090 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35590261-332c-47e0-89e9-4eef3fd36086" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421096 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="35590261-332c-47e0-89e9-4eef3fd36086" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421106 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421112 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421482 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421496 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="35590261-332c-47e0-89e9-4eef3fd36086" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421507 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421516 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421525 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421534 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421543 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.422096 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.427467 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.432705 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.483104 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"bec590bc-e2ef-49e0-80be-27af6f69aa06\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.483323 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"bec590bc-e2ef-49e0-80be-27af6f69aa06\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " Feb 18 
19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.484150 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bec590bc-e2ef-49e0-80be-27af6f69aa06" (UID: "bec590bc-e2ef-49e0-80be-27af6f69aa06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.487886 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r" (OuterVolumeSpecName: "kube-api-access-4hk7r") pod "bec590bc-e2ef-49e0-80be-27af6f69aa06" (UID: "bec590bc-e2ef-49e0-80be-27af6f69aa06"). InnerVolumeSpecName "kube-api-access-4hk7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.489856 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.584803 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"02bb1c31-7377-432f-8434-72981200f1ac\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.584889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"02bb1c31-7377-432f-8434-72981200f1ac\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585193 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " 
pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585246 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585312 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585386 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02bb1c31-7377-432f-8434-72981200f1ac" (UID: "02bb1c31-7377-432f-8434-72981200f1ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585586 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585605 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585617 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.588985 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw" (OuterVolumeSpecName: "kube-api-access-dzgsw") pod "02bb1c31-7377-432f-8434-72981200f1ac" (UID: "02bb1c31-7377-432f-8434-72981200f1ac"). InnerVolumeSpecName "kube-api-access-dzgsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686977 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687057 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " 
pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687366 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687668 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.688553 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.689479 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.708097 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.789216 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.001866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerStarted","Data":"b79edae97107fb27d431eaa24e13cb7b0ff20b985becaeebac3ad72d18abaf73"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.002543 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.006453 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerStarted","Data":"70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.006676 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.011257 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.011264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vtbzd" event={"ID":"02bb1c31-7377-432f-8434-72981200f1ac","Type":"ContainerDied","Data":"958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.011638 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.015042 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerStarted","Data":"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.015831 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.021207 4932 generic.go:334] "Generic (PLEG): container finished" podID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerID="3511d7edf13acf4b55c85650c80b80f04682a6a62d5515928313b0d0eefcc028" exitCode=0 Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.021272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerDied","Data":"3511d7edf13acf4b55c85650c80b80f04682a6a62d5515928313b0d0eefcc028"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.024594 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.024679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-734d-account-create-update-stk6x" event={"ID":"bec590bc-e2ef-49e0-80be-27af6f69aa06","Type":"ContainerDied","Data":"9bf0c8b6e14124204af0268e3540567cb7b036f9d1ead456934ebd8e07330a8e"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.024702 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf0c8b6e14124204af0268e3540567cb7b036f9d1ead456934ebd8e07330a8e" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.039410 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=53.604612272 podStartE2EDuration="1m1.039391579s" podCreationTimestamp="2026-02-18 19:51:40 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.87491046 +0000 UTC m=+1080.456865325" lastFinishedPulling="2026-02-18 19:52:04.309689787 +0000 UTC m=+1087.891644632" observedRunningTime="2026-02-18 19:52:41.034789585 +0000 UTC m=+1124.616744430" watchObservedRunningTime="2026-02-18 19:52:41.039391579 +0000 UTC m=+1124.621346424" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.088671 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.194396138 podStartE2EDuration="1m2.088653542s" podCreationTimestamp="2026-02-18 19:51:39 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.490743719 +0000 UTC m=+1081.072698554" lastFinishedPulling="2026-02-18 19:52:04.385001113 +0000 UTC m=+1087.966955958" observedRunningTime="2026-02-18 19:52:41.085645348 +0000 UTC m=+1124.667600193" watchObservedRunningTime="2026-02-18 19:52:41.088653542 +0000 UTC m=+1124.670608387" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.253884 4932 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.721346595 podStartE2EDuration="1m1.25386013s" podCreationTimestamp="2026-02-18 19:51:40 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.851492633 +0000 UTC m=+1080.433447478" lastFinishedPulling="2026-02-18 19:52:04.384006168 +0000 UTC m=+1087.965961013" observedRunningTime="2026-02-18 19:52:41.111653378 +0000 UTC m=+1124.693608223" watchObservedRunningTime="2026-02-18 19:52:41.25386013 +0000 UTC m=+1124.835814975" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.261110 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.879547 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:41 crc kubenswrapper[4932]: E0218 19:52:41.881564 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bb1c31-7377-432f-8434-72981200f1ac" containerName="mariadb-database-create" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.881666 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bb1c31-7377-432f-8434-72981200f1ac" containerName="mariadb-database-create" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.881965 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bb1c31-7377-432f-8434-72981200f1ac" containerName="mariadb-database-create" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.882701 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.887749 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.894905 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.014252 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.014368 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.033395 4932 generic.go:334] "Generic (PLEG): container finished" podID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerID="da426b82651806673889b52158bea2dd7d720c322fbc355879403c25885c3ec1" exitCode=0 Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.033928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh-config-487hr" event={"ID":"69b0a2f7-a409-4d7e-b126-7b494c71503c","Type":"ContainerDied","Data":"da426b82651806673889b52158bea2dd7d720c322fbc355879403c25885c3ec1"} Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.033957 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-99qbh-config-487hr" event={"ID":"69b0a2f7-a409-4d7e-b126-7b494c71503c","Type":"ContainerStarted","Data":"411c6fe91d47e61d1c94fa3daa641a9d9133f35200c9e298952ec6b73ebd7b7a"} Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.115695 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.115894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.116798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.157133 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.205385 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.519861 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622029 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622782 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622958 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622989 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: 
\"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.623017 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.623040 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.623785 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.624366 4932 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.625600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.632548 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.646094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.647496 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv" (OuterVolumeSpecName: "kube-api-access-zf5rv") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "kube-api-access-zf5rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.653322 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.653663 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts" (OuterVolumeSpecName: "scripts") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.676613 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726234 4932 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726266 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726276 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726284 4932 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726293 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc 
kubenswrapper[4932]: I0218 19:52:42.726301 4932 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.041901 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerStarted","Data":"f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a"} Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.042954 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerStarted","Data":"04e61d84b5f812d0e74e2f99dfbb5e3d031730c69da014b9da8a1364dff80ae4"} Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.043821 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.044841 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerDied","Data":"ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c"} Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.044945 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.063064 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zwtz9" podStartSLOduration=2.06304575 podStartE2EDuration="2.06304575s" podCreationTimestamp="2026-02-18 19:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:52:43.059111353 +0000 UTC m=+1126.641066218" watchObservedRunningTime="2026-02-18 19:52:43.06304575 +0000 UTC m=+1126.645000595" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.460183 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540140 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540213 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540268 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540297 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540348 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540819 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run" (OuterVolumeSpecName: "var-run") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.541610 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.541651 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.542048 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts" (OuterVolumeSpecName: "scripts") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.545319 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft" (OuterVolumeSpecName: "kube-api-access-4tpft") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "kube-api-access-4tpft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642834 4932 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642870 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642881 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642893 4932 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642903 4932 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642911 4932 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.052422 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh-config-487hr" event={"ID":"69b0a2f7-a409-4d7e-b126-7b494c71503c","Type":"ContainerDied","Data":"411c6fe91d47e61d1c94fa3daa641a9d9133f35200c9e298952ec6b73ebd7b7a"} Feb 18 19:52:44 crc 
kubenswrapper[4932]: I0218 19:52:44.052466 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411c6fe91d47e61d1c94fa3daa641a9d9133f35200c9e298952ec6b73ebd7b7a" Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.052533 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.063551 4932 generic.go:334] "Generic (PLEG): container finished" podID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerID="f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a" exitCode=0 Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.063600 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerDied","Data":"f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a"} Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.575266 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.578903 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075015 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 19:52:45 crc kubenswrapper[4932]: E0218 19:52:45.075373 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerName="ovn-config" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075387 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerName="ovn-config" Feb 18 19:52:45 crc kubenswrapper[4932]: E0218 19:52:45.075401 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerName="swift-ring-rebalance" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075406 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerName="swift-ring-rebalance" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075569 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerName="ovn-config" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075583 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerName="swift-ring-rebalance" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.076127 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.078485 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.078740 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mx5f7" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.097939 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-99qbh" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.098740 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.167681 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.167835 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.168066 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.168147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.189824 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" path="/var/lib/kubelet/pods/69b0a2f7-a409-4d7e-b126-7b494c71503c/volumes" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270183 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270226 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.275876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.277787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.279029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"glance-db-sync-rl7xx\" (UID: 
\"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.293758 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.393298 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.395900 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.473237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.474685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70c7b26f-6d2e-4fcd-8240-ca10bd148c99" (UID: "70c7b26f-6d2e-4fcd-8240-ca10bd148c99"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.474752 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.475520 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.478310 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c" (OuterVolumeSpecName: "kube-api-access-lcg8c") pod "70c7b26f-6d2e-4fcd-8240-ca10bd148c99" (UID: "70c7b26f-6d2e-4fcd-8240-ca10bd148c99"). InnerVolumeSpecName "kube-api-access-lcg8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.576878 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.970281 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.081672 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.081679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerDied","Data":"04e61d84b5f812d0e74e2f99dfbb5e3d031730c69da014b9da8a1364dff80ae4"} Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.081739 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e61d84b5f812d0e74e2f99dfbb5e3d031730c69da014b9da8a1364dff80ae4" Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.082684 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerStarted","Data":"f068e85210ddcd828af2d489d54882cc64dba8b583684c4c6f7597bf8f804826"} Feb 18 19:52:48 crc kubenswrapper[4932]: I0218 19:52:48.246864 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:48 crc kubenswrapper[4932]: I0218 19:52:48.260912 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:49 crc kubenswrapper[4932]: I0218 19:52:49.189234 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" path="/var/lib/kubelet/pods/70c7b26f-6d2e-4fcd-8240-ca10bd148c99/volumes" Feb 18 19:52:50 crc kubenswrapper[4932]: I0218 19:52:50.459350 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:50 crc kubenswrapper[4932]: I0218 19:52:50.466106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:50 crc kubenswrapper[4932]: I0218 19:52:50.595918 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:52:51 crc kubenswrapper[4932]: I0218 19:52:51.115867 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 19:52:51 crc kubenswrapper[4932]: I0218 19:52:51.425479 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 19:52:51 crc kubenswrapper[4932]: I0218 19:52:51.761000 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="4a133994-7b33-4db4-a923-5b90d51e47b9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.254572 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 19:52:53 crc kubenswrapper[4932]: E0218 19:52:53.255774 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerName="mariadb-account-create-update" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.255793 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerName="mariadb-account-create-update" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.256018 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerName="mariadb-account-create-update" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.256658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.263958 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.270344 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.411632 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.411742 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.513137 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 
19:52:53.513283 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.514225 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.535164 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.573653 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.606006 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.606549 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.606601 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.608043 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.608112 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349" gracePeriod=600 Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.195235 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349" exitCode=0 Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.195294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349"} Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.195334 4932 scope.go:117] "RemoveContainer" containerID="0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992" Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.815348 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 19:52:58 crc kubenswrapper[4932]: W0218 19:52:58.836371 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb4a050_ebc6_4319_b27f_9c9cce058ec1.slice/crio-f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066 WatchSource:0}: Error finding container f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066: Status 404 returned error can't find the container with id f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066 Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.841015 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.136909 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.204827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" 
event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerStarted","Data":"38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.204867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerStarted","Data":"f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.206571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"b0c4ca69d7b367f202571359f1930a62d621ebd002b1f653a6e0ef9c09429ba2"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.210024 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.213346 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.226205 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xbdgt" podStartSLOduration=6.226168404 podStartE2EDuration="6.226168404s" podCreationTimestamp="2026-02-18 19:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:59.219057668 +0000 UTC m=+1142.801012513" watchObservedRunningTime="2026-02-18 19:52:59.226168404 +0000 UTC m=+1142.808123249" 
Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.251779 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.123133462 podStartE2EDuration="1m12.251761394s" podCreationTimestamp="2026-02-18 19:51:47 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.4408188 +0000 UTC m=+1081.022773645" lastFinishedPulling="2026-02-18 19:52:58.569446712 +0000 UTC m=+1142.151401577" observedRunningTime="2026-02-18 19:52:59.241806129 +0000 UTC m=+1142.823760984" watchObservedRunningTime="2026-02-18 19:52:59.251761394 +0000 UTC m=+1142.833716239" Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.224569 4932 generic.go:334] "Generic (PLEG): container finished" podID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerID="38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee" exitCode=0 Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.224752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerDied","Data":"38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.227867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerStarted","Data":"104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.229599 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"e0f5557ac4379d0c6d366e9f385551b3db089d0f37a9e3b044ddf7c3b9791d40"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.229652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"0d4d6491dfafa508897f8c85011f34a4de9a81466406275234a2c4a77196ad9e"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.294625 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rl7xx" podStartSLOduration=2.677637061 podStartE2EDuration="15.294594993s" podCreationTimestamp="2026-02-18 19:52:45 +0000 UTC" firstStartedPulling="2026-02-18 19:52:45.980537141 +0000 UTC m=+1129.562491986" lastFinishedPulling="2026-02-18 19:52:58.597495073 +0000 UTC m=+1142.179449918" observedRunningTime="2026-02-18 19:53:00.267300681 +0000 UTC m=+1143.849255536" watchObservedRunningTime="2026-02-18 19:53:00.294594993 +0000 UTC m=+1143.876549878" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.116423 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.241412 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"b6f0778a169dc19434921249b3093342769dcc715b273057dabc336fa9eceb1f"} Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.241456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"c3a6ca6a9ed711f3d48080461dc1e16d45096f60bbdde6a4c51ccad3d79b1ae0"} Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.431038 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.670748 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hvt6h"] Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.672023 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.695855 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hvt6h"]
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.750856 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xbdgt"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.761706 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.763140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.763202 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.773739 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"]
Feb 18 19:53:01 crc kubenswrapper[4932]: E0218 19:53:01.774072 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerName="mariadb-account-create-update"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.774088 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerName="mariadb-account-create-update"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.774266 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerName="mariadb-account-create-update"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.774772 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.776415 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.803333 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"]
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.863806 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") "
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.863907 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") "
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864186 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864265 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864314 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.865298 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3eb4a050-ebc6-4319-b27f-9c9cce058ec1" (UID: "3eb4a050-ebc6-4319-b27f-9c9cce058ec1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.865735 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.884613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5" (OuterVolumeSpecName: "kube-api-access-h44j5") pod "3eb4a050-ebc6-4319-b27f-9c9cce058ec1" (UID: "3eb4a050-ebc6-4319-b27f-9c9cce058ec1"). InnerVolumeSpecName "kube-api-access-h44j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.887967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.967923 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hn6qq"]
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969051 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969616 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969712 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969883 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969930 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.971853 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.989044 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hn6qq"]
Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.996885 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.013559 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"]
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.025307 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.030872 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.064642 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"]
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073359 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073426 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073851 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hvt6h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.101596 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-h526s"]
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.102679 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.102769 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.113573 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h526s"]
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.113718 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.113818 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.114004 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.114125 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.175708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176104 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176156 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176197 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176235 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176317 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.177006 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.177749 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.194721 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.194718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.272947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"bbee0ef35cd50681f80a2e01b2c6ec4191424d86c97c1f6ce0c7ff60a9945be4"}
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.272985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"709a384ce96f65611223bccb435327dbaa1ac6245c2d8052bbba507d5da472de"}
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.272995 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"d93a37c9f3e4b5851f721ec371db06711dc32dc7d4eb50f229ad571de6bd5ab7"}
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.276540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerDied","Data":"f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066"}
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.276567 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.276629 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xbdgt"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.280114 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.280213 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.280242 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.293764 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.296128 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.296563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.323782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.385281 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.444101 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h526s"
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.538814 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"]
Feb 18 19:53:02 crc kubenswrapper[4932]: W0218 19:53:02.542912 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac9c39c2_bf9e_4f11_b37f_17089fce08e7.slice/crio-51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266 WatchSource:0}: Error finding container 51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266: Status 404 returned error can't find the container with id 51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.676958 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hvt6h"]
Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.930807 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hn6qq"]
Feb 18 19:53:02 crc kubenswrapper[4932]: W0218 19:53:02.939255 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7988cea_6aa8_4552_8965_04b417c91831.slice/crio-d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816 WatchSource:0}: Error finding container d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816: Status 404 returned error can't find the container with id d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.097269 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"]
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.156972 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h526s"]
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.303023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerStarted","Data":"43fb9c3d5607cbfcdb2e71bfe2fe586c4b79577442d0345f45f7e3f3cb5eb6e7"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.318822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerStarted","Data":"03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.318864 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerStarted","Data":"d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.339129 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerStarted","Data":"979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.339191 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerStarted","Data":"46b0c6beb77ba63cc4fb7bf59b078265e0be2b3e41c391faa1cfacce870602a0"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.347306 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5bd9-account-create-update-7tv8h" event={"ID":"7680bf6b-efd6-452a-8900-09cf55b203ff","Type":"ContainerStarted","Data":"c049a4b17cd32b5f24bb7e9e3ef0f21dbe83e94bb0e6b01b6cebe4cc220b64e4"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.349251 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-hn6qq" podStartSLOduration=2.3492288009999998 podStartE2EDuration="2.349228801s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.333490543 +0000 UTC m=+1146.915445378" watchObservedRunningTime="2026-02-18 19:53:03.349228801 +0000 UTC m=+1146.931183646"
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.352950 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"99a9b400b3f548a25cab483817236360ae6c0770b440b7691ce80970079bc52e"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.360364 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerStarted","Data":"48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.360406 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerStarted","Data":"51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266"}
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.371588 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-hvt6h" podStartSLOduration=2.371569621 podStartE2EDuration="2.371569621s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.364653521 +0000 UTC m=+1146.946608366" watchObservedRunningTime="2026-02-18 19:53:03.371569621 +0000 UTC m=+1146.953524466"
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.398229 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a65d-account-create-update-chx2v" podStartSLOduration=2.398210107 podStartE2EDuration="2.398210107s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.38168237 +0000 UTC m=+1146.963637205" watchObservedRunningTime="2026-02-18 19:53:03.398210107 +0000 UTC m=+1146.980164952"
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.495968 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.496020 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.498458 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.525755 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5bd9-account-create-update-7tv8h" podStartSLOduration=2.525735107 podStartE2EDuration="2.525735107s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.415563854 +0000 UTC m=+1146.997518699" watchObservedRunningTime="2026-02-18 19:53:03.525735107 +0000 UTC m=+1147.107689962"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.384459 4932 generic.go:334] "Generic (PLEG): container finished" podID="56734660-55cc-463c-89f2-131bc9109dab" containerID="979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6" exitCode=0
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.384718 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerDied","Data":"979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6"}
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.388961 4932 generic.go:334] "Generic (PLEG): container finished" podID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerID="0d634b73a958b2e21485770f0ca87b0cc9a8038deca230cf324c0047e0c7f89e" exitCode=0
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.389019 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5bd9-account-create-update-7tv8h" event={"ID":"7680bf6b-efd6-452a-8900-09cf55b203ff","Type":"ContainerDied","Data":"0d634b73a958b2e21485770f0ca87b0cc9a8038deca230cf324c0047e0c7f89e"}
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.393489 4932 generic.go:334] "Generic (PLEG): container finished" podID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerID="48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16" exitCode=0
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.393827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerDied","Data":"48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16"}
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.408039 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7988cea-6aa8-4552-8965-04b417c91831" containerID="03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3" exitCode=0
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.409369 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerDied","Data":"03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3"}
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.412311 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.732937 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hbs76"]
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.734186 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.744963 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hbs76"]
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.835758 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"]
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.836756 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.841492 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.847789 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.847861 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.865227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"]
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.895271 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-4ghxf"]
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.896336 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.900597 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-s5bnj"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.900794 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.940587 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4ghxf"]
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949047 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949086 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949122 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949309 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949380 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949970 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.950459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.972921 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76"
Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051671 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf"
Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051698 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhrgc\" (UniqueName:
\"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051803 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051853 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.053106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.055818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.055960 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.056583 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.058808 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.070856 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.071657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.209352 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.239242 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.461835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"56ce23bb6103e557429dc8690b553377257cd936a4bb509e20a8c92ae8b56a22"} Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.462387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"09903fc1344e02ff0c9b44820c5f554d569cf63ffb8c7ab34e7b724c0902da20"} Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.569532 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hbs76"] Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.772410 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"] Feb 18 19:53:05 crc kubenswrapper[4932]: W0218 19:53:05.777197 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca3578cc_7bd4_4e77_8b29_bbb38f588260.slice/crio-43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385 WatchSource:0}: Error finding container 43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385: Status 404 returned error can't find the container with id 43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385 Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.122340 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.131249 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174018 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"56734660-55cc-463c-89f2-131bc9109dab\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"7680bf6b-efd6-452a-8900-09cf55b203ff\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174153 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"7680bf6b-efd6-452a-8900-09cf55b203ff\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174236 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"56734660-55cc-463c-89f2-131bc9109dab\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.175785 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"56734660-55cc-463c-89f2-131bc9109dab" (UID: "56734660-55cc-463c-89f2-131bc9109dab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.176586 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4ghxf"] Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.176859 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7680bf6b-efd6-452a-8900-09cf55b203ff" (UID: "7680bf6b-efd6-452a-8900-09cf55b203ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.182500 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7" (OuterVolumeSpecName: "kube-api-access-dwdm7") pod "56734660-55cc-463c-89f2-131bc9109dab" (UID: "56734660-55cc-463c-89f2-131bc9109dab"). InnerVolumeSpecName "kube-api-access-dwdm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.189359 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj" (OuterVolumeSpecName: "kube-api-access-mzrrj") pod "7680bf6b-efd6-452a-8900-09cf55b203ff" (UID: "7680bf6b-efd6-452a-8900-09cf55b203ff"). InnerVolumeSpecName "kube-api-access-mzrrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278406 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278443 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278455 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278464 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.379572 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.423338 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.475696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerStarted","Data":"15675a36136757370796fc216004ca775eac02c9effd24c85dfe90820b2828ae"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.479920 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerStarted","Data":"2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.479950 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerStarted","Data":"1ad029f86ec5f0d0a06dcaedd3663dce48eadadc46ce509227278c4f119388a2"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.494738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerDied","Data":"d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.494765 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.494822 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.499544 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-hbs76" podStartSLOduration=2.499531085 podStartE2EDuration="2.499531085s" podCreationTimestamp="2026-02-18 19:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:06.49733142 +0000 UTC m=+1150.079286265" watchObservedRunningTime="2026-02-18 19:53:06.499531085 +0000 UTC m=+1150.081485930" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.518839 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerDied","Data":"46b0c6beb77ba63cc4fb7bf59b078265e0be2b3e41c391faa1cfacce870602a0"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.519086 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b0c6beb77ba63cc4fb7bf59b078265e0be2b3e41c391faa1cfacce870602a0" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.519141 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.532062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5bd9-account-create-update-7tv8h" event={"ID":"7680bf6b-efd6-452a-8900-09cf55b203ff","Type":"ContainerDied","Data":"c049a4b17cd32b5f24bb7e9e3ef0f21dbe83e94bb0e6b01b6cebe4cc220b64e4"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.532092 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c049a4b17cd32b5f24bb7e9e3ef0f21dbe83e94bb0e6b01b6cebe4cc220b64e4" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.532101 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"f7988cea-6aa8-4552-8965-04b417c91831\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588929 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"f7988cea-6aa8-4552-8965-04b417c91831\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588981 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.590495 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7988cea-6aa8-4552-8965-04b417c91831" (UID: "f7988cea-6aa8-4552-8965-04b417c91831"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.590747 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac9c39c2-bf9e-4f11-b37f-17089fce08e7" (UID: "ac9c39c2-bf9e-4f11-b37f-17089fce08e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.600584 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn" (OuterVolumeSpecName: "kube-api-access-7khwn") pod "f7988cea-6aa8-4552-8965-04b417c91831" (UID: "f7988cea-6aa8-4552-8965-04b417c91831"). InnerVolumeSpecName "kube-api-access-7khwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.608374 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb" (OuterVolumeSpecName: "kube-api-access-64cqb") pod "ac9c39c2-bf9e-4f11-b37f-17089fce08e7" (UID: "ac9c39c2-bf9e-4f11-b37f-17089fce08e7"). 
InnerVolumeSpecName "kube-api-access-64cqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.634389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"8c9bc1e6b51378c96aa83821fefc43a6edeb3618c4359e3c31206c5b84643c34"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.634431 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"91038b7abd4dc858aba34a9e38d41f405569102d713f7a7fd829187dca7a23ee"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.634439 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"d86295dcb2fcf3c7ce9e6f16518bddc225d30264891a801a9d6ec00b3e315818"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.637842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerDied","Data":"51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.637883 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.637891 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.641355 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerStarted","Data":"4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.641387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerStarted","Data":"43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.678070 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-53f4-account-create-update-mh2bq" podStartSLOduration=2.678055371 podStartE2EDuration="2.678055371s" podCreationTimestamp="2026-02-18 19:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:06.666351052 +0000 UTC m=+1150.248305897" watchObservedRunningTime="2026-02-18 19:53:06.678055371 +0000 UTC m=+1150.260010216" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690473 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690498 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690507 4932 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690518 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.530973 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.531243 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" containerID="cri-o://66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693" gracePeriod=600 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.531615 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" containerID="cri-o://df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f" gracePeriod=600 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.531665 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" containerID="cri-o://1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768" gracePeriod=600 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.657653 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"cfdeeebc285591118a23dbf8cae9a08259e2f51ba2d3126ba0de8a1ab322026f"} Feb 18 
19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.657882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"fdd0e6806eeeccec58d225097203f7c9f01ff95648d4c9810b6d49398427d4ec"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.661003 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerID="4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4" exitCode=0 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.661400 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerDied","Data":"4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.664585 4932 generic.go:334] "Generic (PLEG): container finished" podID="0b9deee6-7804-492e-88c9-147087152416" containerID="2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682" exitCode=0 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.664654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerDied","Data":"2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.715165 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.081934784 podStartE2EDuration="50.715145728s" podCreationTimestamp="2026-02-18 19:52:17 +0000 UTC" firstStartedPulling="2026-02-18 19:52:59.146410039 +0000 UTC m=+1142.728364874" lastFinishedPulling="2026-02-18 19:53:04.779620973 +0000 UTC m=+1148.361575818" observedRunningTime="2026-02-18 19:53:07.706896115 +0000 UTC m=+1151.288850960" 
watchObservedRunningTime="2026-02-18 19:53:07.715145728 +0000 UTC m=+1151.297100573" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.959695 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960307 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960381 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960457 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960506 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960569 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56734660-55cc-463c-89f2-131bc9109dab" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960613 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="56734660-55cc-463c-89f2-131bc9109dab" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960791 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7988cea-6aa8-4552-8965-04b417c91831" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960842 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7988cea-6aa8-4552-8965-04b417c91831" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.961076 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.961224 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7988cea-6aa8-4552-8965-04b417c91831" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.961314 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.974975 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="56734660-55cc-463c-89f2-131bc9109dab" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.976424 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.976572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.979224 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.125528 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.125835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.125962 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.126094 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.126195 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.126318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.227841 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.228101 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.228788 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.229021 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.229156 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.229795 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.230001 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.230413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.231083 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.231026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.231671 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.264105 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.328553 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686253 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f" exitCode=0 Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686304 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768" exitCode=0 Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686312 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693" exitCode=0 Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f"} Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686362 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768"} Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686380 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.475661 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.490574 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.495336 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.495773 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600495 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600542 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600615 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc 
kubenswrapper[4932]: I0218 19:53:11.600636 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600652 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"0b9deee6-7804-492e-88c9-147087152416\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600685 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600738 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600759 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600829 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600861 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600876 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600928 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"0b9deee6-7804-492e-88c9-147087152416\" (UID: 
\"0b9deee6-7804-492e-88c9-147087152416\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.601687 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.602280 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.602412 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b9deee6-7804-492e-88c9-147087152416" (UID: "0b9deee6-7804-492e-88c9-147087152416"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.602562 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.603434 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca3578cc-7bd4-4e77-8b29-bbb38f588260" (UID: "ca3578cc-7bd4-4e77-8b29-bbb38f588260"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.611754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw" (OuterVolumeSpecName: "kube-api-access-nvhxw") pod "0b9deee6-7804-492e-88c9-147087152416" (UID: "0b9deee6-7804-492e-88c9-147087152416"). InnerVolumeSpecName "kube-api-access-nvhxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.620556 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc" (OuterVolumeSpecName: "kube-api-access-xhrgc") pod "ca3578cc-7bd4-4e77-8b29-bbb38f588260" (UID: "ca3578cc-7bd4-4e77-8b29-bbb38f588260"). InnerVolumeSpecName "kube-api-access-xhrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.627377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config" (OuterVolumeSpecName: "config") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.627465 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out" (OuterVolumeSpecName: "config-out") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.628037 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.632118 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.647828 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq" (OuterVolumeSpecName: "kube-api-access-cnvgq") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "kube-api-access-cnvgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.648600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.651315 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config" (OuterVolumeSpecName: "web-config") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702491 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702545 4932 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702565 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702584 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702599 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702611 4932 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702623 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702638 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702695 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702717 4932 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702734 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702752 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702769 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702785 4932 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.716906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerDied","Data":"43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.716962 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.717039 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.719028 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerDied","Data":"1ad029f86ec5f0d0a06dcaedd3663dce48eadadc46ce509227278c4f119388a2"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.719055 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad029f86ec5f0d0a06dcaedd3663dce48eadadc46ce509227278c4f119388a2" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.719083 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.720282 4932 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.720422 4932 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69") on node "crc" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.721283 4932 generic.go:334] "Generic (PLEG): container finished" podID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerID="104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88" exitCode=0 Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.721357 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerDied","Data":"104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.724916 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"c079ef0a75a184583fc3bcc63484ddbcd7e9466dbb03675318140b785c3f7c07"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.724952 4932 scope.go:117] "RemoveContainer" containerID="df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.725003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.780962 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.799312 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.804577 4932 reconciler_common.go:293] "Volume detached for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806110 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="init-config-reloader" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806528 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="init-config-reloader" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806545 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" Feb 18 19:53:11 crc 
kubenswrapper[4932]: I0218 19:53:11.806553 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806570 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806578 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806598 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806605 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806621 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9deee6-7804-492e-88c9-147087152416" containerName="mariadb-database-create" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806628 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9deee6-7804-492e-88c9-147087152416" containerName="mariadb-database-create" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806650 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerName="mariadb-account-create-update" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806659 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerName="mariadb-account-create-update" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806848 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" Feb 
18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806895 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerName="mariadb-account-create-update" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806919 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806936 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806956 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9deee6-7804-492e-88c9-147087152416" containerName="mariadb-database-create" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.808806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.811147 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5jcnf" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.811502 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.811641 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.812784 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.812942 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 
19:53:11.813126 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.814024 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.814090 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.831385 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.834011 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906358 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906379 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906407 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906484 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" 
Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906505 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906522 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906554 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906587 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906607 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906650 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008187 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008259 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008283 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008298 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" 
(UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008357 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008374 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008409 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008448 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008547 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008606 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" 
Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.012990 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.013037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.014554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.014709 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015344 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015351 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015746 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015793 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e039419306e79ade7652e80c67474011a5658585fd3b39d0b236ffa94ab5d0db/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.017779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.019636 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.023891 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.027261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.032393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.071455 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.140866 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:13 crc kubenswrapper[4932]: I0218 19:53:13.189903 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" path="/var/lib/kubelet/pods/cf98dd42-289f-43fa-b4dc-c6ff814a3c25/volumes" Feb 18 19:53:15 crc kubenswrapper[4932]: I0218 19:53:15.999008 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.080793 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.081006 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.081065 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.081137 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.089363 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.090350 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k" (OuterVolumeSpecName: "kube-api-access-qlv4k") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). InnerVolumeSpecName "kube-api-access-qlv4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.108058 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.134983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data" (OuterVolumeSpecName: "config-data") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184003 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184035 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184046 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184055 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.480889 4932 scope.go:117] "RemoveContainer" containerID="1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.570700 4932 scope.go:117] "RemoveContainer" containerID="66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.594755 4932 scope.go:117] "RemoveContainer" containerID="e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.774857 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" 
event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerStarted","Data":"2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459"} Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.784576 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerDied","Data":"f068e85210ddcd828af2d489d54882cc64dba8b583684c4c6f7597bf8f804826"} Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.784612 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.784621 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f068e85210ddcd828af2d489d54882cc64dba8b583684c4c6f7597bf8f804826" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.802966 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-h526s" podStartSLOduration=1.4788571529999999 podStartE2EDuration="14.802936208s" podCreationTimestamp="2026-02-18 19:53:02 +0000 UTC" firstStartedPulling="2026-02-18 19:53:03.19732856 +0000 UTC m=+1146.779283405" lastFinishedPulling="2026-02-18 19:53:16.521407605 +0000 UTC m=+1160.103362460" observedRunningTime="2026-02-18 19:53:16.790519182 +0000 UTC m=+1160.372474027" watchObservedRunningTime="2026-02-18 19:53:16.802936208 +0000 UTC m=+1160.384891083" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.903321 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:16 crc kubenswrapper[4932]: W0218 19:53:16.914604 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f7bde87_22e2_49c2_a025_ab8f835dff78.slice/crio-54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d WatchSource:0}: Error finding container 
54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d: Status 404 returned error can't find the container with id 54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.044232 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.465092 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.501917 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:17 crc kubenswrapper[4932]: E0218 19:53:17.502289 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerName="glance-db-sync" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.502304 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerName="glance-db-sync" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.502479 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerName="glance-db-sync" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.503278 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.530517 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620223 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620399 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620495 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721602 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.722866 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.722894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.723439 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod 
\"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.723730 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.723960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.742906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.798947 4932 generic.go:334] "Generic (PLEG): container finished" podID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerID="8f1f6c9d991aa15d42b648ceda3a3c5f26b30bbb576f481c856678b1bf62a3ab" exitCode=0 Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.799044 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" event={"ID":"1f7bde87-22e2-49c2-a025-ab8f835dff78","Type":"ContainerDied","Data":"8f1f6c9d991aa15d42b648ceda3a3c5f26b30bbb576f481c856678b1bf62a3ab"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.800066 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" 
event={"ID":"1f7bde87-22e2-49c2-a025-ab8f835dff78","Type":"ContainerStarted","Data":"54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.813838 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"517321ee2b5c108f37907af390aff2f58338e81a6d4f29d0b1fb1230f8840a63"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.817734 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.820216 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerStarted","Data":"e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.846058 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-4ghxf" podStartSLOduration=3.444492023 podStartE2EDuration="13.846037493s" podCreationTimestamp="2026-02-18 19:53:04 +0000 UTC" firstStartedPulling="2026-02-18 19:53:06.193275093 +0000 UTC m=+1149.775229938" lastFinishedPulling="2026-02-18 19:53:16.594820563 +0000 UTC m=+1160.176775408" observedRunningTime="2026-02-18 19:53:17.843263865 +0000 UTC m=+1161.425218730" watchObservedRunningTime="2026-02-18 19:53:17.846037493 +0000 UTC m=+1161.427992338" Feb 18 19:53:18 crc kubenswrapper[4932]: E0218 19:53:18.139112 4932 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 18 19:53:18 crc kubenswrapper[4932]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 18 19:53:18 
crc kubenswrapper[4932]: > podSandboxID="54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d" Feb 18 19:53:18 crc kubenswrapper[4932]: E0218 19:53:18.139556 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 19:53:18 crc kubenswrapper[4932]: container &Container{Name:dnsmasq-dns,Image:38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh5fh58h56bh64bh684h98h654h64bh687h645h54h548h87h59dh56dh655hd9hbfh87h5c9h68bh645h64h8bh8bh585h5bdh55ch546hfbhbdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,Sub
PathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pljhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8465d7b6c9-sv9w5_openstack(1f7bde87-22e2-49c2-a025-ab8f835dff78): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 18 19:53:18 crc kubenswrapper[4932]: > logger="UnhandledError" Feb 18 19:53:18 crc kubenswrapper[4932]: E0218 19:53:18.141017 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount 
`/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.343956 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:18 crc kubenswrapper[4932]: W0218 19:53:18.350440 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81c5b019_830a_45a5_b05e_22f7aa7e41c7.slice/crio-40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88 WatchSource:0}: Error finding container 40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88: Status 404 returned error can't find the container with id 40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88 Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.829726 4932 generic.go:334] "Generic (PLEG): container finished" podID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerID="dd9cde3170c69a353ec61b65c4f18cbd1534c0a768ecc45c08a9e745323cd132" exitCode=0 Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.829799 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerDied","Data":"dd9cde3170c69a353ec61b65c4f18cbd1534c0a768ecc45c08a9e745323cd132"} Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.830037 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerStarted","Data":"40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.440919 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551070 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551151 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551253 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551319 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551351 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551761 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pljhh\" 
(UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.558425 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh" (OuterVolumeSpecName: "kube-api-access-pljhh") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "kube-api-access-pljhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.591848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.597012 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.607724 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.610033 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.617436 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config" (OuterVolumeSpecName: "config") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653867 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653901 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653912 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653923 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653932 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653940 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.840034 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.842879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerStarted","Data":"ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.843438 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.844794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" event={"ID":"1f7bde87-22e2-49c2-a025-ab8f835dff78","Type":"ContainerDied","Data":"54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.844843 4932 scope.go:117] "RemoveContainer" containerID="8f1f6c9d991aa15d42b648ceda3a3c5f26b30bbb576f481c856678b1bf62a3ab" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.844848 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.940528 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.950402 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.956446 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" podStartSLOduration=2.95642951 podStartE2EDuration="2.95642951s" podCreationTimestamp="2026-02-18 19:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:19.93978339 +0000 UTC m=+1163.521738235" watchObservedRunningTime="2026-02-18 19:53:19.95642951 +0000 UTC m=+1163.538384355" Feb 18 19:53:21 crc kubenswrapper[4932]: I0218 19:53:21.188373 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" path="/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volumes" Feb 18 19:53:22 crc kubenswrapper[4932]: I0218 19:53:22.874995 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerID="e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4" exitCode=0 Feb 18 19:53:22 crc kubenswrapper[4932]: I0218 19:53:22.875084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerDied","Data":"e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4"} Feb 18 19:53:23 crc kubenswrapper[4932]: I0218 19:53:23.887398 4932 generic.go:334] "Generic (PLEG): container finished" podID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" 
containerID="2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459" exitCode=0 Feb 18 19:53:23 crc kubenswrapper[4932]: I0218 19:53:23.887474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerDied","Data":"2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459"} Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.281542 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339255 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339316 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339449 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339476 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: 
\"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.344913 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.345435 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl" (OuterVolumeSpecName: "kube-api-access-s2htl") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "kube-api-access-s2htl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.364262 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.386115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data" (OuterVolumeSpecName: "config-data") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440852 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440895 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440910 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440918 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.898705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerDied","Data":"15675a36136757370796fc216004ca775eac02c9effd24c85dfe90820b2828ae"} Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.898770 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15675a36136757370796fc216004ca775eac02c9effd24c85dfe90820b2828ae" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.898728 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.232839 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.361235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.361367 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.361480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.376024 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds" (OuterVolumeSpecName: "kube-api-access-r5sds") pod "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" (UID: "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a"). InnerVolumeSpecName "kube-api-access-r5sds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.388840 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" (UID: "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.404430 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data" (OuterVolumeSpecName: "config-data") pod "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" (UID: "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.463405 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.463435 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.463447 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.912158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerDied","Data":"43fb9c3d5607cbfcdb2e71bfe2fe586c4b79577442d0345f45f7e3f3cb5eb6e7"} Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.912224 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43fb9c3d5607cbfcdb2e71bfe2fe586c4b79577442d0345f45f7e3f3cb5eb6e7" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.912278 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156396 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:26 crc kubenswrapper[4932]: E0218 19:53:26.156725 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerName="init" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156743 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerName="init" Feb 18 19:53:26 crc kubenswrapper[4932]: E0218 19:53:26.156781 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" containerName="keystone-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156790 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" containerName="keystone-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: E0218 19:53:26.156811 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerName="watcher-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156818 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerName="watcher-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156972 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerName="init" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156987 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" containerName="keystone-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156999 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerName="watcher-db-sync" Feb 18 
19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.157501 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.167577 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.167875 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.168047 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.168158 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.178277 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.190029 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.370238 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.370538 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" containerID="cri-o://ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63" gracePeriod=10 Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.374447 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381206 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381255 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381352 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.385815 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.386867 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.396280 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.396524 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-s5bnj" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.444520 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.445919 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.473258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.480753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485777 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485884 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485907 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485939 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486044 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486079 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486130 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.501716 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.502521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.503321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.505779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.508003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.534752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.544190 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.545556 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.581242 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589337 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589396 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589411 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589473 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589510 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"watcher-api-0\" 
(UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.606001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.616413 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.617811 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.624869 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.632105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.642502 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.657238 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693585 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693663 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693698 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693742 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693769 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693784 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693846 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.694766 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.695366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.695684 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.700658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.709726 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.717644 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.718367 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.731097 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.741507 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.743124 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.747953 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.748156 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.748333 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.771304 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-x77d8" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.773887 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.775356 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.787549 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795063 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795105 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795136 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795165 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795272 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" 
Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795329 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795384 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.796385 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.796940 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.797718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.798792 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.806479 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.814537 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.817184 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.836409 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.836548 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.836642 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rp826" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.866202 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.885478 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898133 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898197 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898228 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898287 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898338 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"watcher-applier-0\" 
(UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.904233 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.907099 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.915863 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.929880 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.964653 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.965703 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.973337 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sd6v9" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.974793 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.975001 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.989637 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.991180 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.992672 4932 generic.go:334] "Generic (PLEG): container finished" podID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerID="ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63" exitCode=0 Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.992745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerDied","Data":"ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63"} Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001064 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001110 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001148 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001184 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001213 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod 
\"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001251 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.002389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.003109 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.003543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.010847 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6" exitCode=0 Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.010883 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6"} Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.013525 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.016848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.026445 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.030364 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.034407 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.049991 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwbx\" (UniqueName: 
\"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.078492 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.079057 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.106657 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.106817 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.106890 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod 
\"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107037 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107099 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107122 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107154 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") 
pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107570 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.151586 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.152718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.156586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-s8zmw" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.156766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.156924 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.179649 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.180915 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210874 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210945 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210977 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210998 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211095 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211111 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211234 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211264 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211285 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.212081 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.217549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.218018 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.218620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.218714 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.219250 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sxgcc" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.252046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.278996 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.280023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.280821 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.286020 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.301523 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rph76\" (UniqueName: 
\"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314488 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314638 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314691 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314735 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314882 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.330456 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.373221 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 
19:53:27.373658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.373700 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.373762 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.385770 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.385867 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417081 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417092 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417777 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417954 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417991 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418029 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418067 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418160 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418281 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc 
kubenswrapper[4932]: I0218 19:53:27.418342 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418413 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418942 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.420031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.422462 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.424372 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.428258 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.429932 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.439398 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.441552 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.445396 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446245 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446873 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.447096 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mx5f7" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.447480 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.448074 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.449057 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.454504 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.463010 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.467588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.477584 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.486236 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.497431 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522643 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522709 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522877 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523080 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523414 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523493 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523647 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523738 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523824 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 
19:53:27.523897 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523958 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524060 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524228 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.527269 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.528890 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.528954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.529614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.534190 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.546032 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.627971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628052 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628111 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628193 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: 
I0218 19:53:27.628249 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628269 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628291 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628318 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.629800 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.630280 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.630757 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.631355 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.633577 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.654294 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.656311 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.674875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686450 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686552 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.687622 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.690728 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.708112 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.713733 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: W0218 19:53:27.785995 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc63ad2af_4b3b_4aa5_a300_06aadeef8149.slice/crio-cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae WatchSource:0}: Error finding container cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae: Status 404 returned error can't find the container with id cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.817666 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.836953 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.837939 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.876230 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.878641 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.880333 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.903517 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.904985 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.905357 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.930120 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.933162 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954390 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954640 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.967319 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.977046 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.989659 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:27.991840 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.020352 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.046165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerStarted","Data":"2d6fd36cf5810909c88050cc20c15f847b1b0069bc0b2e13fc22cf63d5c5c033"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.052974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerDied","Data":"40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.053022 4932 scope.go:117] "RemoveContainer" containerID="ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.053129 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.055499 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.059075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerStarted","Data":"50a26765d82393ad4f763879251ca7f0c251c1c50f74af99544b44224a950233"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.064896 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerStarted","Data":"cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.064913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065050 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065250 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065281 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065462 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065490 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065639 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065688 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.068452 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.071063 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.073442 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.074093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.090654 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.090714 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.100406 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.105485 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.106551 
4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.150436 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166369 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166409 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166449 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166589 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166648 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.216085 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.251877 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v" (OuterVolumeSpecName: "kube-api-access-6kb4v") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "kube-api-access-6kb4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.270240 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.271382 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.290962 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.333156 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.354043 4932 scope.go:117] "RemoveContainer" containerID="dd9cde3170c69a353ec61b65c4f18cbd1534c0a768ecc45c08a9e745323cd132" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.512597 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.690060 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.698540 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.848869 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 19:53:28 crc kubenswrapper[4932]: W0218 19:53:28.877439 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30efc86e_0c26_42e4_b907_1d4d985912ed.slice/crio-7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6 WatchSource:0}: Error finding 
container 7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6: Status 404 returned error can't find the container with id 7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6 Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.897071 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.906590 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.929796 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.933079 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config" (OuterVolumeSpecName: "config") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.938936 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.976564 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.989025 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992594 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992617 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992627 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992638 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992646 4932 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.064385 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.071949 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.079008 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.219924 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" path="/var/lib/kubelet/pods/81c5b019-830a-45a5-b05e-22f7aa7e41c7/volumes" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.220497 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerStarted","Data":"d6c505a399db7407167ba85b30249143bd9bde443aac40b322a8f403af6c7869"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.220522 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.241986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerStarted","Data":"df7e1feb306b3e43a9f10b16516d4c855aa78c2e70283552aa8d3546e3dee111"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.243529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerStarted","Data":"502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61"} Feb 18 19:53:29 crc kubenswrapper[4932]: 
I0218 19:53:29.246403 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.252914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" event={"ID":"9d8f2367-684b-453b-bd7a-4d93e021885c","Type":"ContainerStarted","Data":"e47c7606f816972b032cc244cce055d96313af205e2299f5ab36bbb071939e87"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.285433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerStarted","Data":"449b65cc6eee0acc18bb77293bfac087ad9d12fb9f06318dfdbe198587c35eda"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.298773 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerStarted","Data":"3db1ad470af452257972c4a5c8d1fb2ee8875e24f72fe068e89046c3a5a557ce"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.328280 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pchf4" podStartSLOduration=3.328256224 podStartE2EDuration="3.328256224s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:29.270995114 +0000 UTC m=+1172.852949959" watchObservedRunningTime="2026-02-18 19:53:29.328256224 +0000 UTC m=+1172.910211069" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.330419 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.348304 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" 
event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerStarted","Data":"e5dc27f7492f1faa0455250ffd7868de8258df87b7d776e52911e76784a162ec"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.356598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerStarted","Data":"7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.362481 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.373355 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:29 crc kubenswrapper[4932]: E0218 19:53:29.374755 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.374780 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" Feb 18 19:53:29 crc kubenswrapper[4932]: E0218 19:53:29.374819 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="init" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.374828 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="init" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.375960 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.378366 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.398666 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.443698 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.487884 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505418 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505518 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505549 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505566 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod 
\"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607624 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.611523 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.614252 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.614746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.615513 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc 
kubenswrapper[4932]: I0218 19:53:29.643161 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.742345 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:30 crc kubenswrapper[4932]: W0218 19:53:30.088340 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43f771cb_173f_4939_b1d1_e7d1b21834cb.slice/crio-5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1 WatchSource:0}: Error finding container 5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1: Status 404 returned error can't find the container with id 5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1 Feb 18 19:53:30 crc kubenswrapper[4932]: W0218 19:53:30.106716 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda956ae21_8721_4f0a_815f_acb82958ec28.slice/crio-4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874 WatchSource:0}: Error finding container 4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874: Status 404 returned error can't find the container with id 4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874 Feb 18 19:53:30 crc kubenswrapper[4932]: W0218 19:53:30.128287 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod079e3d7d_bd4f_4198_8606_95192a514c07.slice/crio-fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4 WatchSource:0}: Error finding container 
fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4: Status 404 returned error can't find the container with id fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4 Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.370191 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.372563 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerStarted","Data":"5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.374702 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerStarted","Data":"4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.377037 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerStarted","Data":"7726c4f68af3477b632315682a36e3711c9d3bff8965ae81fe2c0dd5455b7980"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.380132 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerStarted","Data":"58e783f05bfc925c4081556f019c7c54bdb33f3d7590e9cb651eb5ff2a823274"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.392667 4932 generic.go:334] "Generic (PLEG): container finished" podID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerID="73b2fccbbe9db45c39c7f1f9fbfc786fb36384c6ef170b3e0e90d0b3358912a1" 
exitCode=0 Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.392917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" event={"ID":"9d8f2367-684b-453b-bd7a-4d93e021885c","Type":"ContainerDied","Data":"73b2fccbbe9db45c39c7f1f9fbfc786fb36384c6ef170b3e0e90d0b3358912a1"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.404916 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.416266 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerStarted","Data":"21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.418110 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerStarted","Data":"682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.438728 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kfzmp" podStartSLOduration=5.438709782 podStartE2EDuration="5.438709782s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:31.433377741 +0000 UTC m=+1175.015332586" watchObservedRunningTime="2026-02-18 19:53:31.438709782 +0000 UTC m=+1175.020664627" Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.951060 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070724 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.071082 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.071129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.109874 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq" (OuterVolumeSpecName: "kube-api-access-8z8bq") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "kube-api-access-8z8bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.127895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.143525 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.145961 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.146998 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.151527 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config" (OuterVolumeSpecName: "config") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190436 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190467 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190476 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190487 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190496 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190505 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.432833 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" event={"ID":"9d8f2367-684b-453b-bd7a-4d93e021885c","Type":"ContainerDied","Data":"e47c7606f816972b032cc244cce055d96313af205e2299f5ab36bbb071939e87"} Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.432884 4932 scope.go:117] "RemoveContainer" containerID="73b2fccbbe9db45c39c7f1f9fbfc786fb36384c6ef170b3e0e90d0b3358912a1" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.432844 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.475082 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.496225 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.503507 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.818299 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Feb 18 19:53:33 crc kubenswrapper[4932]: I0218 19:53:33.194853 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" path="/var/lib/kubelet/pods/9d8f2367-684b-453b-bd7a-4d93e021885c/volumes" Feb 18 19:53:35 crc kubenswrapper[4932]: I0218 19:53:35.463308 4932 generic.go:334] "Generic (PLEG): container finished" podID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerID="502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61" exitCode=0 Feb 18 19:53:35 crc kubenswrapper[4932]: I0218 19:53:35.463405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerDied","Data":"502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61"} Feb 18 19:53:35 crc kubenswrapper[4932]: I0218 19:53:35.465388 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerStarted","Data":"64958cb64aa641fc969187f742c63571ece0fcc99f90f916c984ba259dcd59e7"} Feb 18 
19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.366098 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.414063 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:53:36 crc kubenswrapper[4932]: E0218 19:53:36.414567 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerName="init" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.414580 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerName="init" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.414765 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerName="init" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.417302 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.421538 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.430380 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.467741 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.486842 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6877c868f8-jvwwn"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.488328 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.521909 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6877c868f8-jvwwn"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589546 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-tls-certs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589614 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589651 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-logs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-config-data\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589890 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589955 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589972 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-scripts\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96cw\" (UniqueName: \"kubernetes.io/projected/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-kube-api-access-v96cw\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590235 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590254 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-combined-ca-bundle\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-secret-key\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.692741 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.692807 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-tls-certs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.692829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-logs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693377 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-logs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693468 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-config-data\") pod \"horizon-6877c868f8-jvwwn\" (UID: 
\"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693489 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693770 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-scripts\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693801 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96cw\" (UniqueName: \"kubernetes.io/projected/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-kube-api-access-v96cw\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 
19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696261 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696296 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-combined-ca-bundle\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696348 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-secret-key\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.695712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.694289 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.694703 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-config-data\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.695091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-scripts\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.694418 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.701104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-secret-key\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.702404 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.702503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.705453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-combined-ca-bundle\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.708887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.710757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96cw\" (UniqueName: \"kubernetes.io/projected/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-kube-api-access-v96cw\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.721772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.736004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-tls-certs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.758252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.809888 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:53:37 crc kubenswrapper[4932]: I0218 19:53:37.307500 4932 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf7988cea-6aa8-4552-8965-04b417c91831"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf7988cea-6aa8-4552-8965-04b417c91831] : Timed out while waiting for systemd to remove kubepods-besteffort-podf7988cea_6aa8_4552_8965_04b417c91831.slice"
Feb 18 19:53:37 crc kubenswrapper[4932]: E0218 19:53:37.307750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf7988cea-6aa8-4552-8965-04b417c91831] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf7988cea-6aa8-4552-8965-04b417c91831] : Timed out while waiting for systemd to remove kubepods-besteffort-podf7988cea_6aa8_4552_8965_04b417c91831.slice" pod="openstack/cinder-db-create-hn6qq" podUID="f7988cea-6aa8-4552-8965-04b417c91831"
Feb 18 19:53:37 crc kubenswrapper[4932]: I0218 19:53:37.507532 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hn6qq"
Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.572513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerStarted","Data":"e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545"}
Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.575354 4932 generic.go:334] "Generic (PLEG): container finished" podID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4" exitCode=0
Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.575414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerDied","Data":"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"}
Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.579275 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5"}
Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.636407 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=33.636388244 podStartE2EDuration="33.636388244s" podCreationTimestamp="2026-02-18 19:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:44.631684008 +0000 UTC m=+1188.213638863" watchObservedRunningTime="2026-02-18 19:53:44.636388244 +0000 UTC m=+1188.218343099"
Feb 18 19:53:47 crc kubenswrapper[4932]: I0218 19:53:47.141576 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.166896 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pchf4"
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211666 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") "
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211717 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") "
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") "
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211786 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") "
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211923 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") "
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.212047 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") "
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.216735 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl" (OuterVolumeSpecName: "kube-api-access-c9ghl") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "kube-api-access-c9ghl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.222291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.222383 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.222409 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts" (OuterVolumeSpecName: "scripts") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.234228 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.240371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data" (OuterVolumeSpecName: "config-data") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314449 4932 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314490 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314502 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314517 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314528 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314569 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:52 crc kubenswrapper[4932]: E0218 19:53:52.462502 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest"
Feb 18 19:53:52 crc kubenswrapper[4932]: E0218 19:53:52.463129 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest"
Feb 18 19:53:52 crc kubenswrapper[4932]: E0218 19:53:52.463522 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.58:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n649h66fh65ch5cdh588h75h5fch669hffh577h56ch57dh55h5f9h5ddhb6h577h555h88h594h58fh598h87h564h64dh5c8h5c4h55fh64h557h54h575q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(079e3d7d-bd4f-4198-8606-95192a514c07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.651133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerDied","Data":"cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae"}
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.651196 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae"
Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.651239 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pchf4"
Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.108033 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest"
Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.108074 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest"
Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.108182 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zn59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cpzcj_openstack(43f771cb-173f-4939-b1d1-e7d1b21834cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.109429 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cpzcj" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.259854 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pchf4"]
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.270694 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pchf4"]
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.366468 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vldrp"]
Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.366988 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerName="keystone-bootstrap"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.367009 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerName="keystone-bootstrap"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.367160 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerName="keystone-bootstrap"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.367776 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.375875 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.376484 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.377066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.378016 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.378433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.410232 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vldrp"]
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437559 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437784 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437939 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539563 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539637 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539699 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539768 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.547742 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.548069 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.548742 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.549754 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.552639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.556895 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.668370 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerStarted","Data":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"}
Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.671125 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-cpzcj" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb"
Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.707312 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp"
Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.511614 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.511937 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.512079 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqllv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nqxxn_openstack(3f831817-b833-4ee3-b1e9-77d9c02416ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.513308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nqxxn" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed"
Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.676792 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-nqxxn" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed"
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.013900 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6877c868f8-jvwwn"]
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.033734 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75df984768-5mv9k"]
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.165622 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vldrp"]
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.190411 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" path="/var/lib/kubelet/pods/c63ad2af-4b3b-4aa5-a300-06aadeef8149/volumes"
Feb 18 19:53:55 crc kubenswrapper[4932]: W0218 19:53:55.435514 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec0e208_2bfc_4661_8395_c56418bb9307.slice/crio-0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368 WatchSource:0}: Error finding container 0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368: Status 404 returned error can't find the container with id 0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368
Feb 18 19:53:55 crc kubenswrapper[4932]: W0218 19:53:55.437205 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90dd0ecb_25a6_463a_a0d8_187c5c5478c5.slice/crio-2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829 WatchSource:0}: Error finding container 2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829: Status 404 returned error can't find the container with id 2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.706786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerStarted","Data":"dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453"}
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.711429 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerStarted","Data":"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"}
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.713052 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerStarted","Data":"8e36575f312ac74d40b63b16208afa722288494a47294670fd9808ea408dc232"}
Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.717918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-6877c868f8-jvwwn" event={"ID":"90dd0ecb-25a6-463a-a0d8-187c5c5478c5","Type":"ContainerStarted","Data":"2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721169 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerStarted","Data":"3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721348 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" containerID="cri-o://21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721742 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" containerID="cri-o://3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721977 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.768413 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=25.597982618 podStartE2EDuration="29.768391951s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:27.84827548 +0000 UTC m=+1171.430230315" lastFinishedPulling="2026-02-18 19:53:32.018684803 +0000 UTC m=+1175.600639648" observedRunningTime="2026-02-18 19:53:55.725692079 +0000 UTC m=+1199.307646924" watchObservedRunningTime="2026-02-18 19:53:55.768391951 +0000 UTC m=+1199.350346796" Feb 18 19:53:55 crc 
kubenswrapper[4932]: I0218 19:53:55.776616 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": EOF" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.814106 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=29.814088896 podStartE2EDuration="29.814088896s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.806602532 +0000 UTC m=+1199.388557377" watchObservedRunningTime="2026-02-18 19:53:55.814088896 +0000 UTC m=+1199.396043741" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.826671 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log" containerID="cri-o://e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.826837 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerStarted","Data":"3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.826889 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd" containerID="cri-o://3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.851039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerStarted","Data":"0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.875418 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.186353563 podStartE2EDuration="29.875400816s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.324854796 +0000 UTC m=+1171.906809651" lastFinishedPulling="2026-02-18 19:53:52.013902029 +0000 UTC m=+1195.595856904" observedRunningTime="2026-02-18 19:53:55.840451505 +0000 UTC m=+1199.422406350" watchObservedRunningTime="2026-02-18 19:53:55.875400816 +0000 UTC m=+1199.457355661" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.889515 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerStarted","Data":"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.890491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.905148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.905129918 podStartE2EDuration="29.905129918s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.890803905 +0000 UTC m=+1199.472758750" watchObservedRunningTime="2026-02-18 19:53:55.905129918 +0000 UTC m=+1199.487084763" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.927006 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerStarted","Data":"d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.935738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerStarted","Data":"c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.938633 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" podStartSLOduration=29.938611763 podStartE2EDuration="29.938611763s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.923678595 +0000 UTC m=+1199.505633440" watchObservedRunningTime="2026-02-18 19:53:55.938611763 +0000 UTC m=+1199.520566608" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.949730 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerStarted","Data":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.949884 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log" containerID="cri-o://52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.950631 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" 
containerName="glance-httpd" containerID="cri-o://68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.954090 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-df7zx" podStartSLOduration=4.443230228 podStartE2EDuration="29.953994941s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.883897012 +0000 UTC m=+1172.465851847" lastFinishedPulling="2026-02-18 19:53:54.394661715 +0000 UTC m=+1197.976616560" observedRunningTime="2026-02-18 19:53:55.946795224 +0000 UTC m=+1199.528750069" watchObservedRunningTime="2026-02-18 19:53:55.953994941 +0000 UTC m=+1199.535949786" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.976886 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.976862654 podStartE2EDuration="28.976862654s" podCreationTimestamp="2026-02-18 19:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.970448326 +0000 UTC m=+1199.552403171" watchObservedRunningTime="2026-02-18 19:53:55.976862654 +0000 UTC m=+1199.558817499" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.702838 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.718200 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.718249 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.772307 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830378 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830493 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830578 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: 
\"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830602 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830624 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830720 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830841 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.831027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs" (OuterVolumeSpecName: "logs") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.831152 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.831195 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.838670 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.839011 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts" (OuterVolumeSpecName: "scripts") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.839121 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm" (OuterVolumeSpecName: "kube-api-access-6z4dm") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "kube-api-access-6z4dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.863918 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.884992 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.898396 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data" (OuterVolumeSpecName: "config-data") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932815 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932857 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932873 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932916 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932933 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932946 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.960431 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965516 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="a956ae21-8721-4f0a-815f-acb82958ec28" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" exitCode=0 Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965551 4932 generic.go:334] "Generic (PLEG): container finished" podID="a956ae21-8721-4f0a-815f-acb82958ec28" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" exitCode=143 Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerDied","Data":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerDied","Data":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965644 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerDied","Data":"4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965665 4932 scope.go:117] "RemoveContainer" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965794 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.974908 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerStarted","Data":"e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.974980 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerStarted","Data":"97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.975194 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67874d8bd5-ff7xc" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" containerID="cri-o://97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699" gracePeriod=30 Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.975464 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67874d8bd5-ff7xc" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" containerID="cri-o://e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3" gracePeriod=30 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.001501 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6877c868f8-jvwwn" event={"ID":"90dd0ecb-25a6-463a-a0d8-187c5c5478c5","Type":"ContainerStarted","Data":"ab0a233e7d39fe12bab8290499fd31156075ffd8db7b042097ce10342aa59916"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.003138 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6877c868f8-jvwwn" 
event={"ID":"90dd0ecb-25a6-463a-a0d8-187c5c5478c5","Type":"ContainerStarted","Data":"9682694caadc3019ce7876466d818429a2d77007240a66f539827566ad570483"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.004520 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67874d8bd5-ff7xc" podStartSLOduration=4.810732069 podStartE2EDuration="31.00450299s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.392964783 +0000 UTC m=+1171.974919628" lastFinishedPulling="2026-02-18 19:53:54.586735704 +0000 UTC m=+1198.168690549" observedRunningTime="2026-02-18 19:53:57.002409688 +0000 UTC m=+1200.584364543" watchObservedRunningTime="2026-02-18 19:53:57.00450299 +0000 UTC m=+1200.586457835" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.010157 4932 generic.go:334] "Generic (PLEG): container finished" podID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerID="21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d" exitCode=143 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.010282 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerDied","Data":"21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012294 4932 generic.go:334] "Generic (PLEG): container finished" podID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerID="3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42" exitCode=0 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012317 4932 generic.go:334] "Generic (PLEG): container finished" podID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerID="e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545" exitCode=143 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerDied","Data":"3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012455 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerDied","Data":"e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.017204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerStarted","Data":"c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.017228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerStarted","Data":"8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.027564 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6877c868f8-jvwwn" podStartSLOduration=21.027546067 podStartE2EDuration="21.027546067s" podCreationTimestamp="2026-02-18 19:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.019646303 +0000 UTC m=+1200.601601168" watchObservedRunningTime="2026-02-18 19:53:57.027546067 +0000 UTC m=+1200.609500912" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.034400 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 
19:53:57.043298 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153"}
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.056348 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerStarted","Data":"f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a"}
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.056506 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4cfbdb9c-hwmr5" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" containerID="cri-o://c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd" gracePeriod=30
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.056698 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4cfbdb9c-hwmr5" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" containerID="cri-o://f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a" gracePeriod=30
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.068221 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerStarted","Data":"e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73"}
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081220 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerStarted","Data":"a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c"}
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081292 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerStarted","Data":"80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961"}
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081389 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-644d9bbcf7-chs9h" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" containerID="cri-o://80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961" gracePeriod=30
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081662 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-644d9bbcf7-chs9h" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" containerID="cri-o://a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c" gracePeriod=30
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.082096 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75df984768-5mv9k" podStartSLOduration=21.082075969999998 podStartE2EDuration="21.08207597s" podCreationTimestamp="2026-02-18 19:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.044976406 +0000 UTC m=+1200.626931251" watchObservedRunningTime="2026-02-18 19:53:57.08207597 +0000 UTC m=+1200.664030815"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.082366 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.107003 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.127237 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.145033 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.146661 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.155454 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.155989 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-httpd"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156007 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-httpd"
Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.156021 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156028 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156315 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-httpd"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156342 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.168885 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.198426 4932 scope.go:117] "RemoveContainer" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.199562 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.206763 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.210733 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b4cfbdb9c-hwmr5" podStartSLOduration=5.652345942 podStartE2EDuration="31.210709067s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.873631559 +0000 UTC m=+1172.455586404" lastFinishedPulling="2026-02-18 19:53:54.431994674 +0000 UTC m=+1198.013949529" observedRunningTime="2026-02-18 19:53:57.096366852 +0000 UTC m=+1200.678321697" watchObservedRunningTime="2026-02-18 19:53:57.210709067 +0000 UTC m=+1200.792663912"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.266704 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" path="/var/lib/kubelet/pods/a956ae21-8721-4f0a-815f-acb82958ec28/volumes"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.267583 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.278483 4932 scope.go:117] "RemoveContainer" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"
Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.285482 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": container with ID starting with 68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41 not found: ID does not exist" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.285526 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} err="failed to get container status \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": rpc error: code = NotFound desc = could not find container \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": container with ID starting with 68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41 not found: ID does not exist"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.285551 4932 scope.go:117] "RemoveContainer" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.289394 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.291425 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vldrp" podStartSLOduration=4.291405574 podStartE2EDuration="4.291405574s" podCreationTimestamp="2026-02-18 19:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.120594548 +0000 UTC m=+1200.702549393" watchObservedRunningTime="2026-02-18 19:53:57.291405574 +0000 UTC m=+1200.873360429"
Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.300511 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": container with ID starting with 52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f not found: ID does not exist" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.300588 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} err="failed to get container status \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": rpc error: code = NotFound desc = could not find container \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": container with ID starting with 52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f not found: ID does not exist"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.300615 4932 scope.go:117] "RemoveContainer" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.301361 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} err="failed to get container status \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": rpc error: code = NotFound desc = could not find container \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": container with ID starting with 68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41 not found: ID does not exist"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.301378 4932 scope.go:117] "RemoveContainer" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.303092 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} err="failed to get container status \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": rpc error: code = NotFound desc = could not find container \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": container with ID starting with 52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f not found: ID does not exist"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.315701 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-644d9bbcf7-chs9h" podStartSLOduration=7.698692835 podStartE2EDuration="28.315684232s" podCreationTimestamp="2026-02-18 19:53:29 +0000 UTC" firstStartedPulling="2026-02-18 19:53:34.827381645 +0000 UTC m=+1178.409336490" lastFinishedPulling="2026-02-18 19:53:55.444373042 +0000 UTC m=+1199.026327887" observedRunningTime="2026-02-18 19:53:57.158589664 +0000 UTC m=+1200.740544509" watchObservedRunningTime="2026-02-18 19:53:57.315684232 +0000 UTC m=+1200.897639077"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351450 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351575 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351625 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351655 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.353372 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.353440 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.353487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.372039 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.450429 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.450472 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456339 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456398 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456492 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456587 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.458369 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.458945 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.467286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.468395 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.471625 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.475429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.481469 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67874d8bd5-ff7xc"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.485859 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.510513 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.522748 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.546255 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.666513 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.759822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.759892 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.759940 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760049 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760145 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760194 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760302 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.763518 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.763543 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs" (OuterVolumeSpecName: "logs") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.770397 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll" (OuterVolumeSpecName: "kube-api-access-6d7ll") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "kube-api-access-6d7ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.779579 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts" (OuterVolumeSpecName: "scripts") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.783202 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.833544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862506 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862580 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862595 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862606 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862620 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862632 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.865409 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data" (OuterVolumeSpecName: "config-data") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.865461 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.876960 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4cfbdb9c-hwmr5"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.908243 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.970622 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.970661 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.970670 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.094393 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.095734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerDied","Data":"7726c4f68af3477b632315682a36e3711c9d3bff8965ae81fe2c0dd5455b7980"}
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.095776 4932 scope.go:117] "RemoveContainer" containerID="3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.105244 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.188948 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.216110 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.241829 4932 scope.go:117] "RemoveContainer" containerID="e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.246885 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.306417 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:53:58 crc kubenswrapper[4932]: E0218 19:53:58.307619 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.307656 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd"
Feb 18 19:53:58 crc kubenswrapper[4932]: E0218 19:53:58.307716 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.307726 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.308405 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.308442 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.335659 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.335816 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.338663 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.340521 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.428667 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.503891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504190 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504356 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0"
Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504431 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504712 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504771 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504828 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.551820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607360 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607384 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607407 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607500 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 
19:53:58.607528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607554 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607585 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607681 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.610650 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.611718 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.621424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.624973 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.625066 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.626046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.644310 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4vtw\" (UniqueName: 
\"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.679258 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.687958 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.134244 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerStarted","Data":"49ded8c61eff3d7eb04054517499be8ecf50df374bdd44a32ed528213544141a"} Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.135578 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" containerID="cri-o://dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453" gracePeriod=30 Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.208339 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" path="/var/lib/kubelet/pods/4ef7f755-fa76-4e5c-8689-06727a6a9204/volumes" Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.366352 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.743673 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.001204 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": read tcp 10.217.0.2:38566->10.217.0.150:9322: read: connection reset by peer" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.168475 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerStarted","Data":"58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63"} Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.175047 4932 generic.go:334] "Generic (PLEG): container finished" podID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerID="3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2" exitCode=0 Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.175101 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerDied","Data":"3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2"} Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.182510 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" containerID="cri-o://fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" gracePeriod=30 Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.182654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerStarted","Data":"1fd189f5734df90d29419c8abecc4af71db32a09c9c7fb47958213aa32db2369"} Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 
19:54:00.356263 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477493 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477565 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477697 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.484664 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs" (OuterVolumeSpecName: "logs") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.508557 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7" (OuterVolumeSpecName: "kube-api-access-nrlj7") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "kube-api-access-nrlj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.525356 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.536398 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579932 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579963 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579974 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579984 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.606100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data" (OuterVolumeSpecName: "config-data") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.682453 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.200547 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerStarted","Data":"fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.208199 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerDied","Data":"2d6fd36cf5810909c88050cc20c15f847b1b0069bc0b2e13fc22cf63d5c5c033"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.208244 4932 scope.go:117] "RemoveContainer" containerID="3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.208325 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.216907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerStarted","Data":"99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.216936 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerStarted","Data":"ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.224767 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.224755 podStartE2EDuration="4.224755s" podCreationTimestamp="2026-02-18 19:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:01.221325546 +0000 UTC m=+1204.803280391" watchObservedRunningTime="2026-02-18 19:54:01.224755 +0000 UTC m=+1204.806709845" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.259614 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.259595878 podStartE2EDuration="3.259595878s" podCreationTimestamp="2026-02-18 19:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:01.246822814 +0000 UTC m=+1204.828777659" watchObservedRunningTime="2026-02-18 19:54:01.259595878 +0000 UTC m=+1204.841550723" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.273954 4932 scope.go:117] "RemoveContainer" containerID="21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d" 
Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.297827 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.339191 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.371551 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: E0218 19:54:01.372008 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372026 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" Feb 18 19:54:01 crc kubenswrapper[4932]: E0218 19:54:01.372039 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372045 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372293 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372311 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.373272 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.375771 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.380721 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508107 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508780 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610414 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610471 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610592 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610647 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod 
\"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.611312 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.615476 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.615787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.631712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.635327 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.696982 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.238825 4932 generic.go:334] "Generic (PLEG): container finished" podID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerID="e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73" exitCode=0 Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.238912 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerDied","Data":"e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73"} Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.289293 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 19:54:02.453790 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 19:54:02.455585 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 19:54:02.456584 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 
19:54:02.456614 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.979351 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.061477 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.062030 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="dnsmasq-dns" containerID="cri-o://f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e" gracePeriod=10 Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.199629 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" path="/var/lib/kubelet/pods/efabc52d-6f3c-4442-9b80-09577d6d5ed7/volumes" Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.251292 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerStarted","Data":"914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e"} Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.251331 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerStarted","Data":"f81f3e519e272ec341248a6b7ba9a38b40c5833968d66d807fd43af06ff4634a"} Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.261550 4932 generic.go:334] "Generic (PLEG): container 
finished" podID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerID="d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8" exitCode=0 Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.261786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerDied","Data":"d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8"} Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.265341 4932 generic.go:334] "Generic (PLEG): container finished" podID="93b88bfc-e293-4af3-a085-184607bf9327" containerID="f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e" exitCode=0 Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.265399 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerDied","Data":"f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e"} Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.270476 4932 generic.go:334] "Generic (PLEG): container finished" podID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerID="dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453" exitCode=1 Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.270523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerDied","Data":"dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453"} Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.758922 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.759369 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 
19:54:06.761608 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.810253 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.810302 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.827977 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6877c868f8-jvwwn" podUID="90dd0ecb-25a6-463a-a0d8-187c5c5478c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.837549 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.847064 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.859583 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925285 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925337 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925408 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925450 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925522 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925544 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925577 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925597 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925659 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925702 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 
19:54:06.925733 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925796 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925821 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925913 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.927395 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs" (OuterVolumeSpecName: "logs") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: 
"96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.937582 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs" (OuterVolumeSpecName: "logs") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.950810 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.954425 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx" (OuterVolumeSpecName: "kube-api-access-wv8fx") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "kube-api-access-wv8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.955593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts" (OuterVolumeSpecName: "scripts") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.958071 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr" (OuterVolumeSpecName: "kube-api-access-5l4dr") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "kube-api-access-5l4dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.962560 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv" (OuterVolumeSpecName: "kube-api-access-k42qv") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "kube-api-access-k42qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.979222 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts" (OuterVolumeSpecName: "scripts") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.038381 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.040832 4932 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.041347 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.076935 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077373 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077493 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077589 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077644 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077702 4932 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077753 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.040905 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data" (OuterVolumeSpecName: "config-data") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.086602 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data" (OuterVolumeSpecName: "config-data") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.088419 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.096562 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.104930 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.141712 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.148898 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data" (OuterVolumeSpecName: "config-data") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.150845 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180514 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180599 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180695 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180807 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180874 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181429 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181492 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181502 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181510 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181518 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181526 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181536 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.199783 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k" (OuterVolumeSpecName: "kube-api-access-j2b4k") pod 
"93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "kube-api-access-j2b4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.283445 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.299392 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerDied","Data":"50a26765d82393ad4f763879251ca7f0c251c1c50f74af99544b44224a950233"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.299451 4932 scope.go:117] "RemoveContainer" containerID="dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.299588 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.308034 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerDied","Data":"7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.308077 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.308459 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.322072 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerDied","Data":"8e36575f312ac74d40b63b16208afa722288494a47294670fd9808ea408dc232"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.322112 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e36575f312ac74d40b63b16208afa722288494a47294670fd9808ea408dc232" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.322079 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.336716 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.352481 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerStarted","Data":"49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.366932 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.384784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerStarted","Data":"7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385450 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385817 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerName="placement-db-sync" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385832 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerName="placement-db-sync" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385849 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385854 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385865 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="init" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385871 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="init" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385886 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerName="keystone-bootstrap" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385892 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerName="keystone-bootstrap" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385907 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="dnsmasq-dns" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385913 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b88bfc-e293-4af3-a085-184607bf9327" 
containerName="dnsmasq-dns" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386087 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerName="keystone-bootstrap" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386099 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerName="placement-db-sync" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386109 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="dnsmasq-dns" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386118 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386745 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.390094 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": dial tcp 10.217.0.169:9322: connect: connection refused" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.390445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.401825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerDied","Data":"4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.401884 4932 scope.go:117] "RemoveContainer" 
containerID="f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.402100 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.408867 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.410556 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cpzcj" podStartSLOduration=4.655847196 podStartE2EDuration="41.410522229s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:30.098610264 +0000 UTC m=+1173.680565109" lastFinishedPulling="2026-02-18 19:54:06.853285297 +0000 UTC m=+1210.435240142" observedRunningTime="2026-02-18 19:54:07.375527377 +0000 UTC m=+1210.957482222" watchObservedRunningTime="2026-02-18 19:54:07.410522229 +0000 UTC m=+1210.992477074" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.430820 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.430977 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.448322 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.448298509 podStartE2EDuration="6.448298509s" podCreationTimestamp="2026-02-18 19:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:07.411599885 +0000 UTC m=+1210.993554730" watchObservedRunningTime="2026-02-18 19:54:07.448298509 +0000 UTC m=+1211.030253364" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.451894 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.453072 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config" (OuterVolumeSpecName: "config") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.454823 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.457151 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.458743 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.458777 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.468022 4932 scope.go:117] "RemoveContainer" containerID="a93d81ac35fef706c2981873bb26c1272af93758393d8995dcc39345d9e18399" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487182 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487230 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487290 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487732 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487747 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487757 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487766 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.547257 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.547307 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589287 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589338 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589394 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589414 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.592129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.594046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.594792 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.601440 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.607835 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.611979 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.612478 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.739444 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.903852 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.916217 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.045474 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.048555 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.054806 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.055016 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.055039 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.070135 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-s8zmw" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.070344 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.092556 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.114085 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dc9dbf7f4-c6vxb"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.116161 4932 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.119683 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.119683 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120338 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120580 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120692 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120946 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121182 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121200 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121243 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121448 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121713 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.130157 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc9dbf7f4-c6vxb"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223386 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-scripts\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223477 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-internal-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223551 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223600 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " 
pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-public-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223688 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgt4b\" (UniqueName: \"kubernetes.io/projected/5742307d-705d-4197-bab4-53ec94801b4d-kube-api-access-bgt4b\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-credential-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223780 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " 
pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223809 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-fernet-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223841 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-config-data\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-combined-ca-bundle\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223943 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223981 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" 
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.224708 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.231770 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.232047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.232212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.234469 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.241926 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.242209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.268382 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325350 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-scripts\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-internal-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-public-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325487 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgt4b\" (UniqueName: \"kubernetes.io/projected/5742307d-705d-4197-bab4-53ec94801b4d-kube-api-access-bgt4b\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-credential-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-fernet-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-config-data\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-combined-ca-bundle\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.330157 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-combined-ca-bundle\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.336849 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-scripts\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.337333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-config-data\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.339072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-public-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.341313 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-fernet-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.343335 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-credential-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: 
\"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.343630 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-internal-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.346051 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgt4b\" (UniqueName: \"kubernetes.io/projected/5742307d-705d-4197-bab4-53ec94801b4d-kube-api-access-bgt4b\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.385869 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.451489 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.452906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"04f5dff2832c6635da78aa840490b39a4906ea50c8d89ba21f85a3c5474f7c9b"} Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.465314 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be"} Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.465370 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.465386 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.689260 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.689762 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.787198 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.804073 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.811058 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.067219 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc9dbf7f4-c6vxb"] Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.196317 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b88bfc-e293-4af3-a085-184607bf9327" path="/var/lib/kubelet/pods/93b88bfc-e293-4af3-a085-184607bf9327/volumes" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.196939 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" path="/var/lib/kubelet/pods/96fe12c6-435c-4ef9-a340-c15cd050d898/volumes" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.487731 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c"} Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.497657 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerStarted","Data":"39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd"} Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.497699 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerStarted","Data":"e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1"} Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.497711 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerStarted","Data":"f46833318ddc8961d6f04764c058cb88d8c7c195fabe7b752747972666313452"} Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.498452 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.498477 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.501616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9dbf7f4-c6vxb" event={"ID":"5742307d-705d-4197-bab4-53ec94801b4d","Type":"ContainerStarted","Data":"fbac170756bef77569ea71ae6716558e3b3fc9b88cbfad79c6fd09a46e1aab16"} Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.501641 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9dbf7f4-c6vxb" event={"ID":"5742307d-705d-4197-bab4-53ec94801b4d","Type":"ContainerStarted","Data":"eee41656359fb6205565b7a7e83c3774339a7dc7f5af7f3790ebb1fed632c786"} Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.503465 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.503622 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.514742 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.514720114 podStartE2EDuration="2.514720114s" podCreationTimestamp="2026-02-18 19:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:09.501423806 +0000 UTC m=+1213.083378661" watchObservedRunningTime="2026-02-18 19:54:09.514720114 +0000 UTC m=+1213.096674969" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.521219 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc76b87d8-4l7z8" podStartSLOduration=2.521204563 
podStartE2EDuration="2.521204563s" podCreationTimestamp="2026-02-18 19:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:09.518604589 +0000 UTC m=+1213.100559434" watchObservedRunningTime="2026-02-18 19:54:09.521204563 +0000 UTC m=+1213.103159398" Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.545124 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dc9dbf7f4-c6vxb" podStartSLOduration=1.5451058020000001 podStartE2EDuration="1.545105802s" podCreationTimestamp="2026-02-18 19:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:09.541663117 +0000 UTC m=+1213.123617962" watchObservedRunningTime="2026-02-18 19:54:09.545105802 +0000 UTC m=+1213.127060647" Feb 18 19:54:10 crc kubenswrapper[4932]: I0218 19:54:10.509590 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:54:10 crc kubenswrapper[4932]: I0218 19:54:10.509824 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:54:10 crc kubenswrapper[4932]: I0218 19:54:10.512214 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.098876 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.100842 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.530109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" 
event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerStarted","Data":"80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29"} Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.530411 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.530436 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.553584 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nqxxn" podStartSLOduration=4.791997867 podStartE2EDuration="45.553562998s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.577180409 +0000 UTC m=+1172.159135254" lastFinishedPulling="2026-02-18 19:54:09.33874554 +0000 UTC m=+1212.920700385" observedRunningTime="2026-02-18 19:54:11.549676162 +0000 UTC m=+1215.131631007" watchObservedRunningTime="2026-02-18 19:54:11.553562998 +0000 UTC m=+1215.135517843" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.697943 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.697992 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.987870 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.988447 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.279450 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85d5f6489d-gxmwz"] Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.280867 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.334229 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85d5f6489d-gxmwz"] Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-public-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-scripts\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-internal-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/394a5313-f592-47b5-92ce-5f87a10335d7-logs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451427 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4dr5h\" (UniqueName: \"kubernetes.io/projected/394a5313-f592-47b5-92ce-5f87a10335d7-kube-api-access-4dr5h\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451502 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-config-data\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-combined-ca-bundle\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.457556 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.460803 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.464028 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.464085 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.542928 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dr5h\" (UniqueName: \"kubernetes.io/projected/394a5313-f592-47b5-92ce-5f87a10335d7-kube-api-access-4dr5h\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-config-data\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-combined-ca-bundle\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 
19:54:12.554288 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-public-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-scripts\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-internal-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/394a5313-f592-47b5-92ce-5f87a10335d7-logs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554992 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/394a5313-f592-47b5-92ce-5f87a10335d7-logs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.559771 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-scripts\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.561237 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-internal-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.561848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-combined-ca-bundle\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.572654 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-public-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.576786 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-config-data\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.577849 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dr5h\" (UniqueName: \"kubernetes.io/projected/394a5313-f592-47b5-92ce-5f87a10335d7-kube-api-access-4dr5h\") pod 
\"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.629034 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.240462 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85d5f6489d-gxmwz"] Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.310600 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.310701 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.320949 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.566932 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85d5f6489d-gxmwz" event={"ID":"394a5313-f592-47b5-92ce-5f87a10335d7","Type":"ContainerStarted","Data":"212201d63b6b8e58f239be487bb94cbe807f228155c6382d5e410092510cb942"} Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.567196 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85d5f6489d-gxmwz" event={"ID":"394a5313-f592-47b5-92ce-5f87a10335d7","Type":"ContainerStarted","Data":"ce6d6eddda8600e13002c7cc13064dfd297cc5341490d80518ec2257b4ef9593"} Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.579866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85d5f6489d-gxmwz" event={"ID":"394a5313-f592-47b5-92ce-5f87a10335d7","Type":"ContainerStarted","Data":"4d80aa8aa803e3cd96c7fe9f9a4dc873cd21201cdcec6d6cabe27f7bf9577faf"} Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.581338 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.581377 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.584006 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerID="4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c" exitCode=1 Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.584705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c"} Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.585077 4932 scope.go:117] "RemoveContainer" containerID="4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c" Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.622448 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85d5f6489d-gxmwz" podStartSLOduration=2.622426286 podStartE2EDuration="2.622426286s" podCreationTimestamp="2026-02-18 19:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:14.621139825 +0000 UTC m=+1218.203094670" watchObservedRunningTime="2026-02-18 19:54:14.622426286 +0000 UTC m=+1218.204381131" Feb 18 19:54:15 crc kubenswrapper[4932]: I0218 19:54:15.597268 4932 generic.go:334] "Generic (PLEG): container finished" podID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerID="49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3" exitCode=0 Feb 18 19:54:15 crc kubenswrapper[4932]: I0218 19:54:15.597350 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" 
event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerDied","Data":"49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3"} Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.319261 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.319498 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" containerID="cri-o://914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e" gracePeriod=30 Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.319599 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" containerID="cri-o://7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1" gracePeriod=30 Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.607149 4932 generic.go:334] "Generic (PLEG): container finished" podID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerID="914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e" exitCode=143 Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.607241 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerDied","Data":"914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e"} Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.173654 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:59718->10.217.0.169:9322: read: connection reset by peer" Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.173683 4932 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:59710->10.217.0.169:9322: read: connection reset by peer" Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.452663 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.455799 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.459671 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.459703 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.621522 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerID="7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1" exitCode=0 Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.621565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerDied","Data":"7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1"} Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.739947 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.739990 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.987606 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.098945 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"43f771cb-173f-4939-b1d1-e7d1b21834cb\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.099034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"43f771cb-173f-4939-b1d1-e7d1b21834cb\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.099283 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod 
\"43f771cb-173f-4939-b1d1-e7d1b21834cb\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.112650 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "43f771cb-173f-4939-b1d1-e7d1b21834cb" (UID: "43f771cb-173f-4939-b1d1-e7d1b21834cb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.114479 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59" (OuterVolumeSpecName: "kube-api-access-4zn59") pod "43f771cb-173f-4939-b1d1-e7d1b21834cb" (UID: "43f771cb-173f-4939-b1d1-e7d1b21834cb"). InnerVolumeSpecName "kube-api-access-4zn59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.145334 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f771cb-173f-4939-b1d1-e7d1b21834cb" (UID: "43f771cb-173f-4939-b1d1-e7d1b21834cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.201899 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.201940 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.201953 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.604801 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.634784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerDied","Data":"5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1"} Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.634821 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.634843 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.635981 4932 generic.go:334] "Generic (PLEG): container finished" podID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerID="682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753" exitCode=0 Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.636006 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerDied","Data":"682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753"} Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.761562 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.272751 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-69669cb55f-sp2x2"] Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.273674 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerName="barbican-db-sync" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.273700 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerName="barbican-db-sync" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.273959 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerName="barbican-db-sync" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.275549 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.281209 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.281366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.281560 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sxgcc" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.291568 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-69669cb55f-sp2x2"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.320634 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.323643 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.326931 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.344688 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.359213 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432032 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e25333-fed2-4944-8c7e-151c0bd6ab6c-logs\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432112 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data-custom\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data-custom\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432239 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-logs\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432273 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkz56\" (UniqueName: \"kubernetes.io/projected/05e25333-fed2-4944-8c7e-151c0bd6ab6c-kube-api-access-bkz56\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432310 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg48r\" (UniqueName: \"kubernetes.io/projected/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-kube-api-access-fg48r\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432359 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-combined-ca-bundle\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432428 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: 
\"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.448521 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.448892 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.448902 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.448926 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.448932 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.449126 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.449142 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.450080 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.459739 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.533818 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.534557 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535016 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535201 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535392 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: 
\"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535833 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data-custom\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535934 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data-custom\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.536095 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-logs\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.536223 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkz56\" (UniqueName: \"kubernetes.io/projected/05e25333-fed2-4944-8c7e-151c0bd6ab6c-kube-api-access-bkz56\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.537892 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: 
\"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.538016 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg48r\" (UniqueName: \"kubernetes.io/projected/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-kube-api-access-fg48r\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.538141 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-combined-ca-bundle\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.538353 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.540161 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.540293 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e25333-fed2-4944-8c7e-151c0bd6ab6c-logs\") pod 
\"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.544468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e25333-fed2-4944-8c7e-151c0bd6ab6c-logs\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.545366 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data-custom\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.545625 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs" (OuterVolumeSpecName: "logs") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.545662 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-logs\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.551691 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data-custom\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.572129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.575063 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl" (OuterVolumeSpecName: "kube-api-access-5fdcl") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "kube-api-access-5fdcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.583900 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-combined-ca-bundle\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.595858 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkz56\" (UniqueName: \"kubernetes.io/projected/05e25333-fed2-4944-8c7e-151c0bd6ab6c-kube-api-access-bkz56\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.596000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.597363 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg48r\" (UniqueName: \"kubernetes.io/projected/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-kube-api-access-fg48r\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.597466 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: 
\"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.601091 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.603919 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.605774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.612416 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.629258 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.630207 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642512 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642571 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642741 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: 
\"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642954 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642972 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642986 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642996 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.653130 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerID="80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29" exitCode=0 Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.653281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" 
event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerDied","Data":"80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29"} Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.665432 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data" (OuterVolumeSpecName: "config-data") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.667474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerDied","Data":"f81f3e519e272ec341248a6b7ba9a38b40c5833968d66d807fd43af06ff4634a"} Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.667517 4932 scope.go:117] "RemoveContainer" containerID="7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.667680 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.677098 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942"} Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.692356 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.696601 4932 scope.go:117] "RemoveContainer" containerID="914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.717623 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.746933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.746986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747007 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747065 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " 
pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747144 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747188 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747212 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " 
pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747254 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747287 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747358 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.748152 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.748693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.749250 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.749836 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.752521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.753185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.761250 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.769953 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.773079 4932 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.785236 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.786797 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.788937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.789364 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.789534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.793680 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849459 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.850668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.861779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.872239 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.874091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.874642 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.925299 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951638 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2km\" (UniqueName: \"kubernetes.io/projected/3bab1a8c-1512-4353-90c0-b145865fc593-kube-api-access-km2km\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951718 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-config-data\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951772 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab1a8c-1512-4353-90c0-b145865fc593-logs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.952924 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: 
I0218 19:54:19.952990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.953145 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.955281 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.055586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.055927 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.055956 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" 
Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2km\" (UniqueName: \"kubernetes.io/projected/3bab1a8c-1512-4353-90c0-b145865fc593-kube-api-access-km2km\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-config-data\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056139 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab1a8c-1512-4353-90c0-b145865fc593-logs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056591 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab1a8c-1512-4353-90c0-b145865fc593-logs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.062104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.071816 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.076715 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.083922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2km\" (UniqueName: \"kubernetes.io/projected/3bab1a8c-1512-4353-90c0-b145865fc593-kube-api-access-km2km\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.085362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.092839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-config-data\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: 
I0218 19:54:20.109477 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.155079 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.262841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.262893 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.263050 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.268899 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx" (OuterVolumeSpecName: "kube-api-access-mpwbx") pod "c4c20fc2-cf78-41c9-9e37-c5bea35d472f" (UID: "c4c20fc2-cf78-41c9-9e37-c5bea35d472f"). InnerVolumeSpecName "kube-api-access-mpwbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.335322 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4c20fc2-cf78-41c9-9e37-c5bea35d472f" (UID: "c4c20fc2-cf78-41c9-9e37-c5bea35d472f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.336638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config" (OuterVolumeSpecName: "config") pod "c4c20fc2-cf78-41c9-9e37-c5bea35d472f" (UID: "c4c20fc2-cf78-41c9-9e37-c5bea35d472f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.364896 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.364933 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.364944 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.383187 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.395552 4932 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-worker-69669cb55f-sp2x2"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.610934 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.634348 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.774483 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" event={"ID":"d6a2f5f7-e711-48ad-9455-4c9591d751a4","Type":"ContainerStarted","Data":"80607fa7dffc51679b1e994f65af38dcef63e402a63bb96a1efa6d78960754ca"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.795888 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.799815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.799980 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" containerID="cri-o://c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153" gracePeriod=30 Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.800225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.800500 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" 
containerID="cri-o://2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d" gracePeriod=30 Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.800548 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" containerID="cri-o://41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be" gracePeriod=30 Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.807093 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69669cb55f-sp2x2" event={"ID":"05e25333-fed2-4944-8c7e-151c0bd6ab6c","Type":"ContainerStarted","Data":"cdfbe590e51619b2f5eefc007b6ced59292930f1279dfa6fd7af6821d4acb829"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.825875 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.861546 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" event={"ID":"6b01fc13-e894-46fd-8f24-d9ccdbce09e0","Type":"ContainerStarted","Data":"db5629b00eceeeaf6a12066f216b0267dcb1e9ee48dd48e12f7a4e2e2d732d15"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.880198 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerStarted","Data":"c7d003d8c5cc0d3edc83d2a07bde218aaf6fe754f628f14115b8310796a97a1b"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.892491 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:20 crc kubenswrapper[4932]: E0218 19:54:20.892903 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerName="neutron-db-sync" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.892919 
4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerName="neutron-db-sync" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.893131 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerName="neutron-db-sync" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.894131 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.901808 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.904778 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.905737 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerDied","Data":"d6c505a399db7407167ba85b30249143bd9bde443aac40b322a8f403af6c7869"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.905794 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c505a399db7407167ba85b30249143bd9bde443aac40b322a8f403af6c7869" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983348 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983502 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983607 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983828 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.984252 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.984374 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.987926 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-5966846f96-hbrsw"]
Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.989596 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.002197 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.003270 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.003424 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rp826"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.003534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.010742 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"]
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087376 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087453 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087561 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087643 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.089096 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.089945 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.090521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.091292 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.103049 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.120875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.147916 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189841 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.194920 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" path="/var/lib/kubelet/pods/58b1eaea-5735-4c71-9c13-83bbece4cb4a/volumes"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.198441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.202967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.204221 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.206724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.219982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.242418 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.384227 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.389968 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.482346 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75df984768-5mv9k"]
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.556928 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nqxxn"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") "
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703644 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") "
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703727 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") "
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703755 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") "
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") "
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703841 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") "
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.704368 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.714375 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.714465 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv" (OuterVolumeSpecName: "kube-api-access-qqllv") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "kube-api-access-qqllv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.714476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts" (OuterVolumeSpecName: "scripts") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.753683 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.778431 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data" (OuterVolumeSpecName: "config-data") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808368 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808400 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808409 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808424 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808435 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.955199 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:21 crc kubenswrapper[4932]: E0218 19:54:21.955775 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerName="cinder-db-sync"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.955790 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerName="cinder-db-sync"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.955981 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerName="cinder-db-sync"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.956946 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.966165 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.976243 4932 generic.go:334] "Generic (PLEG): container finished" podID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerID="e2da060f7790ff93315a845bafd7b63811c1d420398b145f60768509ea598a27" exitCode=0
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.976307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" event={"ID":"d6a2f5f7-e711-48ad-9455-4c9591d751a4","Type":"ContainerDied","Data":"e2da060f7790ff93315a845bafd7b63811c1d420398b145f60768509ea598a27"}
Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.978094 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021415 4932 generic.go:334] "Generic (PLEG): container finished" podID="079e3d7d-bd4f-4198-8606-95192a514c07" containerID="2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d" exitCode=0
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021447 4932 generic.go:334] "Generic (PLEG): container finished" podID="079e3d7d-bd4f-4198-8606-95192a514c07" containerID="41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be" exitCode=2
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.052870 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.110502 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerStarted","Data":"7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.110562 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerStarted","Data":"98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.110972 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.111012 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121337 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121432 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121490 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121509 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121560 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121591 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.156268 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.182499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3bab1a8c-1512-4353-90c0-b145865fc593","Type":"ContainerStarted","Data":"b39b378463df4659a7d815ad559e055320c381d547ead7d0839359df1016468c"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.182548 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3bab1a8c-1512-4353-90c0-b145865fc593","Type":"ContainerStarted","Data":"d18571c0d5c05e581ab7e9d6c5c54075a1b2cf346cc6716193737fc498f14d6c"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.182561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3bab1a8c-1512-4353-90c0-b145865fc593","Type":"ContainerStarted","Data":"0e7eb3e8f305c855ff6ec62060ea2b4e3728920ab8aae8cac868ca003e3590f4"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.183367 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.209574 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="3bab1a8c-1512-4353-90c0-b145865fc593" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.178:9322/\": dial tcp 10.217.0.178:9322: connect: connection refused"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228615 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228684 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228793 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.229036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.239286 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerDied","Data":"e5dc27f7492f1faa0455250ffd7868de8258df87b7d776e52911e76784a162ec"}
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.239352 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dc27f7492f1faa0455250ffd7868de8258df87b7d776e52911e76784a162ec"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.239517 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nqxxn"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.243188 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" containerID="cri-o://c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9" gracePeriod=30
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.238167 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" containerID="cri-o://8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7" gracePeriod=30
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.273922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.288309 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.292598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.295696 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.301843 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.307511 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.309063 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.335271 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.385472 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.396302 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7449c5884b-q9l4k" podStartSLOduration=3.396275381 podStartE2EDuration="3.396275381s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:22.153931474 +0000 UTC m=+1225.735886319" watchObservedRunningTime="2026-02-18 19:54:22.396275381 +0000 UTC m=+1225.978230246"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.421836 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.42181758 podStartE2EDuration="3.42181758s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:22.260968319 +0000 UTC m=+1225.842923164" watchObservedRunningTime="2026-02-18 19:54:22.42181758 +0000 UTC m=+1226.003772415"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.439668 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.441302 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.443663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.443885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.443921 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.444014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.444035 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.444096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.448448 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.454478 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"]
Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.460315 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.469291 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.469441 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.477947 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.478007 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.511542 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545369 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545391 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545417 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545457 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545480 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545500 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545536 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545553 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545651 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.546398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: 
\"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.546892 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.552426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.552989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.553502 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.586812 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc 
kubenswrapper[4932]: I0218 19:54:22.653273 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653485 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod 
\"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653617 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.657298 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.657643 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.665739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.671913 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.672997 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.675498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.683998 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.783503 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.816349 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:54:23 crc kubenswrapper[4932]: W0218 19:54:23.083750 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1c0405_2770_4a03_ba51_c78005d57ad9.slice/crio-b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b WatchSource:0}: Error finding container b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b: Status 404 returned error can't find the container with id b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.251955 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerStarted","Data":"b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b"} Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.694350 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.775857 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.775900 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.775980 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.776030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.776150 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.776200 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.781922 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82" (OuterVolumeSpecName: "kube-api-access-kxn82") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "kube-api-access-kxn82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.804510 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.805239 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.812638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config" (OuterVolumeSpecName: "config") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.821754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.822404 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889477 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889686 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889696 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889707 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") on node \"crc\" DevicePath \"\"" Feb 18 
19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889717 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889725 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.282407 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" exitCode=1 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.282829 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.282871 4932 scope.go:117] "RemoveContainer" containerID="4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.283694 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" Feb 18 19:54:24 crc kubenswrapper[4932]: E0218 19:54:24.283990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:24 crc kubenswrapper[4932]: W0218 19:54:24.288720 4932 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebab9a68_9ab1_4d04_84ec_9f54b1e6e616.slice/crio-abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9 WatchSource:0}: Error finding container abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9: Status 404 returned error can't find the container with id abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.291405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69669cb55f-sp2x2" event={"ID":"05e25333-fed2-4944-8c7e-151c0bd6ab6c","Type":"ContainerStarted","Data":"72119b86a375ae2c811dba508a69261ce4b7198d03c91fdf10bcd82e870b617f"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.305320 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" event={"ID":"6b01fc13-e894-46fd-8f24-d9ccdbce09e0","Type":"ContainerStarted","Data":"320475adc150dd4aac637b0e3a86249fd4a7cd7866314ced895ac0f66a8016d7"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.307082 4932 generic.go:334] "Generic (PLEG): container finished" podID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerID="780645213b3d6509f2ae57179e0c778515aafd9091ab4640640a6222945146e1" exitCode=0 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.307158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b589695-vqq6h" event={"ID":"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea","Type":"ContainerDied","Data":"780645213b3d6509f2ae57179e0c778515aafd9091ab4640640a6222945146e1"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.310202 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.310237 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b589695-vqq6h" 
event={"ID":"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea","Type":"ContainerStarted","Data":"2544e4221da69a4d78f85e7f0d63e78abed137750dbac8c78c132c0ee3b4a87d"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.324682 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.330610 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerStarted","Data":"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.338666 4932 generic.go:334] "Generic (PLEG): container finished" podID="dec0e208-2bfc-4661-8395-c56418bb9307" containerID="c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9" exitCode=0 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.338755 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerDied","Data":"c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.340810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" event={"ID":"d6a2f5f7-e711-48ad-9455-4c9591d751a4","Type":"ContainerDied","Data":"80607fa7dffc51679b1e994f65af38dcef63e402a63bb96a1efa6d78960754ca"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.340893 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.459296 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:24 crc kubenswrapper[4932]: W0218 19:54:24.493509 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08fb57b1_f237_4913_8897_a21202273268.slice/crio-7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af WatchSource:0}: Error finding container 7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af: Status 404 returned error can't find the container with id 7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.511865 4932 scope.go:117] "RemoveContainer" containerID="e2da060f7790ff93315a845bafd7b63811c1d420398b145f60768509ea598a27" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.607255 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.629652 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.809383 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938482 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938570 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938710 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938733 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.943979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b" (OuterVolumeSpecName: "kube-api-access-svh7b") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "kube-api-access-svh7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.964011 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.967573 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config" (OuterVolumeSpecName: "config") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.982002 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.985629 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.010757 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040844 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040868 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040876 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040886 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040895 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040903 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.109714 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.201435 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" path="/var/lib/kubelet/pods/d6a2f5f7-e711-48ad-9455-4c9591d751a4/volumes" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.375143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerStarted","Data":"7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.380443 4932 generic.go:334] "Generic (PLEG): container finished" podID="079e3d7d-bd4f-4198-8606-95192a514c07" containerID="c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153" exitCode=0 Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.380519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.397577 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" 
event={"ID":"6b01fc13-e894-46fd-8f24-d9ccdbce09e0","Type":"ContainerStarted","Data":"5f663127e38a953cd3de0606d0bf65e582824975b52335bdeb26c0e4505ad974"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.409515 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.410264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b589695-vqq6h" event={"ID":"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea","Type":"ContainerDied","Data":"2544e4221da69a4d78f85e7f0d63e78abed137750dbac8c78c132c0ee3b4a87d"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.410296 4932 scope.go:117] "RemoveContainer" containerID="780645213b3d6509f2ae57179e0c778515aafd9091ab4640640a6222945146e1" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.420703 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" podStartSLOduration=2.988441068 podStartE2EDuration="6.420688105s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="2026-02-18 19:54:20.39007905 +0000 UTC m=+1223.972033895" lastFinishedPulling="2026-02-18 19:54:23.822326077 +0000 UTC m=+1227.404280932" observedRunningTime="2026-02-18 19:54:25.411302454 +0000 UTC m=+1228.993257289" watchObservedRunningTime="2026-02-18 19:54:25.420688105 +0000 UTC m=+1229.002642950" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.444151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerStarted","Data":"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.444973 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:25 crc kubenswrapper[4932]: 
I0218 19:54:25.463358 4932 generic.go:334] "Generic (PLEG): container finished" podID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerID="b1963dc8bdedaa6e9c39260e4aa454ec9b1f122ff3e931be78b28e85782c2717" exitCode=0 Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.463448 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerDied","Data":"b1963dc8bdedaa6e9c39260e4aa454ec9b1f122ff3e931be78b28e85782c2717"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.463477 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerStarted","Data":"abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.498233 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.527220 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.546604 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5966846f96-hbrsw" podStartSLOduration=5.546584565 podStartE2EDuration="5.546584565s" podCreationTimestamp="2026-02-18 19:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:25.524223894 +0000 UTC m=+1229.106178749" watchObservedRunningTime="2026-02-18 19:54:25.546584565 +0000 UTC m=+1229.128539400" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.560426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69669cb55f-sp2x2" 
event={"ID":"05e25333-fed2-4944-8c7e-151c0bd6ab6c","Type":"ContainerStarted","Data":"e72c600997093359029736ce17b3968d1b22e8dfb4825143cd4a61465c27edf8"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.590981 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerStarted","Data":"00c41dbe58ad3dc460e41a4f8f86809ef9204f330e62756cf3eed317cf475042"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.593880 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-69669cb55f-sp2x2" podStartSLOduration=3.183690635 podStartE2EDuration="6.593866939s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="2026-02-18 19:54:20.387315312 +0000 UTC m=+1223.969270157" lastFinishedPulling="2026-02-18 19:54:23.797491596 +0000 UTC m=+1227.379446461" observedRunningTime="2026-02-18 19:54:25.591334857 +0000 UTC m=+1229.173289712" watchObservedRunningTime="2026-02-18 19:54:25.593866939 +0000 UTC m=+1229.175821784" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.643821 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.756936 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.756998 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757185 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757243 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757270 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757329 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.759438 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.759508 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.763309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4" (OuterVolumeSpecName: "kube-api-access-xgvx4") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "kube-api-access-xgvx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.786049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts" (OuterVolumeSpecName: "scripts") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.811277 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.848194 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867701 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867740 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867749 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867758 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867768 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867778 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.899025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data" (OuterVolumeSpecName: "config-data") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.969794 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.112099 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.373998 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.615541 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerStarted","Data":"2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.615813 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.618585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerStarted","Data":"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.622179 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.622252 4932 scope.go:117] "RemoveContainer" containerID="2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.622412 
4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerStarted","Data":"ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626192 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerStarted","Data":"b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626362 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" containerID="cri-o://b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b" gracePeriod=30 Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626669 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.627034 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" containerID="cri-o://ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73" gracePeriod=30 Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.636667 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" podStartSLOduration=4.636647927 podStartE2EDuration="4.636647927s" podCreationTimestamp="2026-02-18 19:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:54:26.636240897 +0000 UTC m=+1230.218195742" watchObservedRunningTime="2026-02-18 19:54:26.636647927 +0000 UTC m=+1230.218602772" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.688594 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.688575036 podStartE2EDuration="4.688575036s" podCreationTimestamp="2026-02-18 19:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:26.661489389 +0000 UTC m=+1230.243444224" watchObservedRunningTime="2026-02-18 19:54:26.688575036 +0000 UTC m=+1230.270529881" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.716254 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.722672 4932 scope.go:117] "RemoveContainer" containerID="41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.725411 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736188 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736546 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736565 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736584 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736592 4932 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736605 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736611 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736620 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736626 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736644 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736649 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736836 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736853 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736865 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736884 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736902 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.739223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.741586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.741865 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.773529 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.776952 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806229 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806324 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806351 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806431 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 
19:54:26.867035 4932 scope.go:117] "RemoveContainer" containerID="c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907758 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907952 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.908019 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.908109 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.908140 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.909194 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.910135 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.913866 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.914015 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " 
pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.914741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.916748 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.935590 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.158670 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.204115 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" path="/var/lib/kubelet/pods/079e3d7d-bd4f-4198-8606-95192a514c07/volumes" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.211389 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" path="/var/lib/kubelet/pods/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea/volumes" Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.461298 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.471096 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.481361 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.481456 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677532 4932 generic.go:334] "Generic (PLEG): container finished" podID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerID="f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677568 4932 generic.go:334] "Generic (PLEG): container finished" podID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerID="c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677580 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerDied","Data":"f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerDied","Data":"c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.680686 4932 generic.go:334] "Generic (PLEG): container finished" podID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerID="b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b" exitCode=143 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.680740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerDied","Data":"b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683425 4932 generic.go:334] "Generic (PLEG): container finished" podID="a620c48b-58fa-487f-8997-e2784ddc497b" 
containerID="e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683448 4932 generic.go:334] "Generic (PLEG): container finished" podID="a620c48b-58fa-487f-8997-e2784ddc497b" containerID="97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683484 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerDied","Data":"e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerDied","Data":"97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710688 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerID="a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710731 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerID="80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerDied","Data":"a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" 
event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerDied","Data":"80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.713241 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerStarted","Data":"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.739648 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.739680 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.740520 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.740732 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.765409 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.201338976 podStartE2EDuration="6.765383276s" podCreationTimestamp="2026-02-18 19:54:21 +0000 UTC" firstStartedPulling="2026-02-18 19:54:24.511287361 +0000 UTC m=+1228.093242196" lastFinishedPulling="2026-02-18 19:54:25.075331651 +0000 UTC m=+1228.657286496" observedRunningTime="2026-02-18 19:54:27.740334815 +0000 UTC m=+1231.322289670" 
watchObservedRunningTime="2026-02-18 19:54:27.765383276 +0000 UTC m=+1231.347338141" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.894421 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960541 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960755 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960811 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960832 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960870 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: 
\"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.981746 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47" (OuterVolumeSpecName: "kube-api-access-lfb47") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "kube-api-access-lfb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.982364 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs" (OuterVolumeSpecName: "logs") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.996985 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.004476 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.020979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data" (OuterVolumeSpecName: "config-data") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.058664 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts" (OuterVolumeSpecName: "scripts") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094119 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094152 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094163 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094184 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094193 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.435240 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.464075 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604341 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604397 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604525 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc 
kubenswrapper[4932]: I0218 19:54:28.604568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604598 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604626 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs" (OuterVolumeSpecName: "logs") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604646 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604679 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604700 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604767 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs" (OuterVolumeSpecName: "logs") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.605137 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.605158 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.612915 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm" (OuterVolumeSpecName: "kube-api-access-62ssm") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "kube-api-access-62ssm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.623700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.623857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76" (OuterVolumeSpecName: "kube-api-access-rph76") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "kube-api-access-rph76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.626185 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.637722 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts" (OuterVolumeSpecName: "scripts") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.659412 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts" (OuterVolumeSpecName: "scripts") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.679822 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data" (OuterVolumeSpecName: "config-data") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.685787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data" (OuterVolumeSpecName: "config-data") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721080 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721135 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721146 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721155 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721166 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721252 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721261 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721269 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725351 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57c4489bcf-qchgn"] Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725707 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725723 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725742 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725749 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725771 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 
19:54:28.725788 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725794 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725802 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725808 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725817 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725823 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725989 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726006 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726016 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726032 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726043 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726049 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.728996 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.733493 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.734110 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.761470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerDied","Data":"64958cb64aa641fc969187f742c63571ece0fcc99f90f916c984ba259dcd59e7"} Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.761520 4932 scope.go:117] "RemoveContainer" containerID="a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.761646 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.778218 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57c4489bcf-qchgn"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.786774 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"}
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.786905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"bcdb72bd174613995404d4a92c415ade81bee3dfae5093758e2e4468047c8e5f"}
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.802505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerDied","Data":"449b65cc6eee0acc18bb77293bfac087ad9d12fb9f06318dfdbe198587c35eda"}
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.802589 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.823914 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824199 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-combined-ca-bundle\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-httpd-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824491 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-internal-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824636 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-ovndb-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824747 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-public-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.825059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjt2s\" (UniqueName: \"kubernetes.io/projected/91d8a414-576a-4c50-990e-3daa2724ecb1-kube-api-access-rjt2s\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.827306 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.829334 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.829474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerDied","Data":"3db1ad470af452257972c4a5c8d1fb2ee8875e24f72fe068e89046c3a5a557ce"}
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.862102 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.884147 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.900750 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.913233 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.926412 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"]
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-ovndb-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-public-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927060 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjt2s\" (UniqueName: \"kubernetes.io/projected/91d8a414-576a-4c50-990e-3daa2724ecb1-kube-api-access-rjt2s\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927229 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927259 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-combined-ca-bundle\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927279 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-httpd-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927323 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-internal-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.941914 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-internal-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.942544 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-combined-ca-bundle\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.943160 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.947059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-ovndb-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.950909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-httpd-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.957694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjt2s\" (UniqueName: \"kubernetes.io/projected/91d8a414-576a-4c50-990e-3daa2724ecb1-kube-api-access-rjt2s\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.965761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-public-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.005205 4932 scope.go:117] "RemoveContainer" containerID="80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.038835 4932 scope.go:117] "RemoveContainer" containerID="f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.077623 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.189973 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" path="/var/lib/kubelet/pods/4938c577-60aa-45c3-9190-b6e82bcf8b0d/volumes"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.190659 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" path="/var/lib/kubelet/pods/a620c48b-58fa-487f-8997-e2784ddc497b/volumes"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.191780 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" path="/var/lib/kubelet/pods/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf/volumes"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.225406 4932 scope.go:117] "RemoveContainer" containerID="c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.332911 4932 scope.go:117] "RemoveContainer" containerID="e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.671320 4932 scope.go:117] "RemoveContainer" containerID="97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699"
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.871002 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"}
Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.907266 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57c4489bcf-qchgn"]
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.022707 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8557cf8c94-8d7qp"]
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.025154 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.049927 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.051031 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.069353 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8557cf8c94-8d7qp"]
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.112687 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.160759 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193704 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-public-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193811 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data-custom\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193948 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-combined-ca-bundle\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e104e849-d054-4208-8b93-823e82c2627f-logs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.194009 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6vw\" (UniqueName: \"kubernetes.io/projected/e104e849-d054-4208-8b93-823e82c2627f-kube-api-access-cz6vw\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.194031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-internal-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296120 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data-custom\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296185 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-combined-ca-bundle\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e104e849-d054-4208-8b93-823e82c2627f-logs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296243 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6vw\" (UniqueName: \"kubernetes.io/projected/e104e849-d054-4208-8b93-823e82c2627f-kube-api-access-cz6vw\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296265 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-internal-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296359 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-public-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296390 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296652 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e104e849-d054-4208-8b93-823e82c2627f-logs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.303919 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-internal-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.305006 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-public-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.305068 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data-custom\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.313693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-combined-ca-bundle\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.314656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.330835 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6vw\" (UniqueName: \"kubernetes.io/projected/e104e849-d054-4208-8b93-823e82c2627f-kube-api-access-cz6vw\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.432072 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.726791 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.895127 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"}
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.899399 4932 generic.go:334] "Generic (PLEG): container finished" podID="5bd90883-79db-4903-87ab-828b9608f9fa" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" exitCode=137
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.899497 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.900234 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerDied","Data":"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"}
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.900255 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerDied","Data":"df7e1feb306b3e43a9f10b16516d4c855aa78c2e70283552aa8d3546e3dee111"}
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.900272 4932 scope.go:117] "RemoveContainer" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.904769 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57c4489bcf-qchgn" event={"ID":"91d8a414-576a-4c50-990e-3daa2724ecb1","Type":"ContainerStarted","Data":"d2b6d0488b17d213b7573339849794ce994907ae0e84f1dc6b70e23a24945529"}
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.904807 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57c4489bcf-qchgn" event={"ID":"91d8a414-576a-4c50-990e-3daa2724ecb1","Type":"ContainerStarted","Data":"2f36790b4e38abbd8d80704e32d6588bbe88a6a1a64652a37c07a7528178cd51"}
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.914669 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") "
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") "
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915583 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") "
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915717 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") "
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs" (OuterVolumeSpecName: "logs") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.917810 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.918337 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.923877 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w" (OuterVolumeSpecName: "kube-api-access-jkk7w") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "kube-api-access-jkk7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.951651 4932 scope.go:117] "RemoveContainer" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"
Feb 18 19:54:30 crc kubenswrapper[4932]: E0218 19:54:30.954901 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746\": container with ID starting with fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746 not found: ID does not exist" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.955107 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"} err="failed to get container status \"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746\": rpc error: code = NotFound desc = could not find container \"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746\": container with ID starting with fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746 not found: ID does not exist"
Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.981323 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.001487 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data" (OuterVolumeSpecName: "config-data") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.022342 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.022381 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.022393 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.222144 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8557cf8c94-8d7qp"]
Feb 18 19:54:31 crc kubenswrapper[4932]: W0218 19:54:31.237745 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode104e849_d054_4208_8b93_823e82c2627f.slice/crio-9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9 WatchSource:0}: Error finding container 9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9: Status 404 returned error can't find the container with id 9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.933822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57c4489bcf-qchgn" event={"ID":"91d8a414-576a-4c50-990e-3daa2724ecb1","Type":"ContainerStarted","Data":"16c1d684fe935ad4f6a91d59e272a507b07fb7acf4d9f7cbf831c127b7702151"}
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.934194 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57c4489bcf-qchgn"
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941021 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8557cf8c94-8d7qp" event={"ID":"e104e849-d054-4208-8b93-823e82c2627f","Type":"ContainerStarted","Data":"9cd2acd89dcd9fd17bf09fce3fbd1b66e182a4bc1dde9c06a01f7196afec0550"}
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941069 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8557cf8c94-8d7qp" event={"ID":"e104e849-d054-4208-8b93-823e82c2627f","Type":"ContainerStarted","Data":"59f2d3398355a74dd1ca4ccdfb932421d1f49c78319ed7d65537d34c0a82a39e"}
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941082 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8557cf8c94-8d7qp" event={"ID":"e104e849-d054-4208-8b93-823e82c2627f","Type":"ContainerStarted","Data":"9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9"}
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941098 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.990332 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8557cf8c94-8d7qp" podStartSLOduration=2.990308771 podStartE2EDuration="2.990308771s" podCreationTimestamp="2026-02-18 19:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:31.988717902 +0000 UTC m=+1235.570672767" watchObservedRunningTime="2026-02-18 19:54:31.990308771 +0000 UTC m=+1235.572263616"
Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.991408 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57c4489bcf-qchgn" podStartSLOduration=3.991399818 podStartE2EDuration="3.991399818s" podCreationTimestamp="2026-02-18 19:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:31.966589463 +0000 UTC m=+1235.548544308" watchObservedRunningTime="2026-02-18 19:54:31.991399818 +0000 UTC m=+1235.573354663"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.169148 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.433709 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.512907 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.707518 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.785363 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.844589 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"]
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.844839 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" containerID="cri-o://2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" gracePeriod=10
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.968284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"}
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.970525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.979065 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: connect: connection refused"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.004744 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.05723253 podStartE2EDuration="7.004723726s" podCreationTimestamp="2026-02-18 19:54:26 +0000 UTC" firstStartedPulling="2026-02-18 19:54:28.081835163 +0000 UTC m=+1231.663790008" lastFinishedPulling="2026-02-18 19:54:32.029326359 +0000 UTC m=+1235.611281204" observedRunningTime="2026-02-18 19:54:32.992980965 +0000 UTC m=+1236.574935820" watchObservedRunningTime="2026-02-18 19:54:33.004723726 +0000 UTC m=+1236.586678571"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.086925 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.450121 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585086 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585298 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585345 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585371 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585438 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585482 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\"
(UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.592623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s" (OuterVolumeSpecName: "kube-api-access-d668s") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "kube-api-access-d668s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.650345 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.653704 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.655502 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.668513 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689552 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689587 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689599 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689609 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689619 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.703799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config" (OuterVolumeSpecName: "config") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.791640 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.979880 4932 generic.go:334] "Generic (PLEG): container finished" podID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" exitCode=0 Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.979941 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.979985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerDied","Data":"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"} Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.980049 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerDied","Data":"58e783f05bfc925c4081556f019c7c54bdb33f3d7590e9cb651eb5ff2a823274"} Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.980079 4932 scope.go:117] "RemoveContainer" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.980845 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="08fb57b1-f237-4913-8897-a21202273268" 
containerName="cinder-scheduler" containerID="cri-o://826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" gracePeriod=30 Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.981103 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe" containerID="cri-o://f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" gracePeriod=30 Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.029621 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.039043 4932 scope.go:117] "RemoveContainer" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4" Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.042154 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.064495 4932 scope.go:117] "RemoveContainer" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" Feb 18 19:54:34 crc kubenswrapper[4932]: E0218 19:54:34.065130 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d\": container with ID starting with 2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d not found: ID does not exist" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.065185 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"} err="failed to get container status \"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d\": rpc error: code = NotFound desc = 
could not find container \"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d\": container with ID starting with 2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d not found: ID does not exist" Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.065211 4932 scope.go:117] "RemoveContainer" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4" Feb 18 19:54:34 crc kubenswrapper[4932]: E0218 19:54:34.065505 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4\": container with ID starting with 6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4 not found: ID does not exist" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4" Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.065559 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"} err="failed to get container status \"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4\": rpc error: code = NotFound desc = could not find container \"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4\": container with ID starting with 6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4 not found: ID does not exist" Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.993732 4932 generic.go:334] "Generic (PLEG): container finished" podID="08fb57b1-f237-4913-8897-a21202273268" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" exitCode=0 Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.994885 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerDied","Data":"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"} Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.189890 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" path="/var/lib/kubelet/pods/0affb7f8-ebd4-4d8d-b41c-dd968316038d/volumes" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.640936 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734638 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734853 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: 
\"08fb57b1-f237-4913-8897-a21202273268\") " Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734913 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734946 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.735682 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.737039 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.742014 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj" (OuterVolumeSpecName: "kube-api-access-lfcnj") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "kube-api-access-lfcnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.744272 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts" (OuterVolumeSpecName: "scripts") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.744818 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.795236 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.831013 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data" (OuterVolumeSpecName: "config-data") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839598 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839637 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839649 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839669 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839681 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.900372 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022639 4932 generic.go:334] "Generic (PLEG): container finished" podID="08fb57b1-f237-4913-8897-a21202273268" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" exitCode=0 Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerDied","Data":"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"} Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022702 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022728 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerDied","Data":"7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af"} Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022746 4932 scope.go:117] "RemoveContainer" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.135139 4932 scope.go:117] "RemoveContainer" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.139270 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.153303 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192359 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192861 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192881 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler" Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192892 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="init" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192918 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="init" Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192935 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192941 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe" Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192953 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192958 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192967 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192973 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193149 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193184 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193194 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fb57b1-f237-4913-8897-a21202273268" 
containerName="probe" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193207 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.194198 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.196331 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.207102 4932 scope.go:117] "RemoveContainer" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.207861 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7\": container with ID starting with f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7 not found: ID does not exist" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.207960 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"} err="failed to get container status \"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7\": rpc error: code = NotFound desc = could not find container \"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7\": container with ID starting with f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7 not found: ID does not exist" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.208040 4932 scope.go:117] "RemoveContainer" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" Feb 18 
19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.208656 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb\": container with ID starting with 826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb not found: ID does not exist" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.208685 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"} err="failed to get container status \"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb\": rpc error: code = NotFound desc = could not find container \"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb\": container with ID starting with 826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb not found: ID does not exist" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.214401 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35c97753-d0c4-44cf-abe0-f529c2899b7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0" Feb 18 
19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367520 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxh7\" (UniqueName: \"kubernetes.io/projected/35c97753-d0c4-44cf-abe0-f529c2899b7d-kube-api-access-gdxh7\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367696 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367969 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.470446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.470852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471191 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35c97753-d0c4-44cf-abe0-f529c2899b7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxh7\" (UniqueName: \"kubernetes.io/projected/35c97753-d0c4-44cf-abe0-f529c2899b7d-kube-api-access-gdxh7\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471379 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35c97753-d0c4-44cf-abe0-f529c2899b7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.476907 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.477019 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.477047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.479984 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.495896 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdxh7\" (UniqueName: \"kubernetes.io/projected/35c97753-d0c4-44cf-abe0-f529c2899b7d-kube-api-access-gdxh7\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.545723 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.759213 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused"
Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.104809 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.207290 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fb57b1-f237-4913-8897-a21202273268" path="/var/lib/kubelet/pods/08fb57b1-f237-4913-8897-a21202273268/volumes"
Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.740095 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.741703 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942"
Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.742265 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 18 19:54:38 crc kubenswrapper[4932]: I0218 19:54:38.045592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"}
Feb 18 19:54:38 crc kubenswrapper[4932]: I0218 19:54:38.047321 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"35c97753-d0c4-44cf-abe0-f529c2899b7d","Type":"ContainerStarted","Data":"5ce47eaf86d8b72ef803255871f90147374de82bc9885aece126a16a0fc4ba11"}
Feb 18 19:54:38 crc kubenswrapper[4932]: I0218 19:54:38.047348 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"35c97753-d0c4-44cf-abe0-f529c2899b7d","Type":"ContainerStarted","Data":"7066cc7e29345081d0b6a878585df3450e6342538a5bebfb4b825a53f1fd11b0"}
Feb 18 19:54:39 crc kubenswrapper[4932]: I0218 19:54:39.058867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"35c97753-d0c4-44cf-abe0-f529c2899b7d","Type":"ContainerStarted","Data":"1e39bf1869e3a887cfef23f8406dff251742f8aa0fce2d1942783af5ab5ea984"}
Feb 18 19:54:39 crc kubenswrapper[4932]: I0218 19:54:39.080191 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.080158497 podStartE2EDuration="3.080158497s" podCreationTimestamp="2026-02-18 19:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:39.075167523 +0000 UTC m=+1242.657122378" watchObservedRunningTime="2026-02-18 19:54:39.080158497 +0000 UTC m=+1242.662113342"
Feb 18 19:54:40 crc kubenswrapper[4932]: I0218 19:54:40.087635 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:40 crc kubenswrapper[4932]: I0218 19:54:40.092088 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.084296 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" exitCode=1
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.084369 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"}
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.084579 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942"
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.085547 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"
Feb 18 19:54:41 crc kubenswrapper[4932]: E0218 19:54:41.085975 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19"
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.120881 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.546427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.804402 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.924800 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8557cf8c94-8d7qp"
Feb 18 19:54:42 crc kubenswrapper[4932]: I0218 19:54:42.010953 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"]
Feb 18 19:54:42 crc kubenswrapper[4932]: I0218 19:54:42.016338 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7449c5884b-q9l4k" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log" containerID="cri-o://98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009" gracePeriod=30
Feb 18 19:54:42 crc kubenswrapper[4932]: I0218 19:54:42.016512 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7449c5884b-q9l4k" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api" containerID="cri-o://7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38" gracePeriod=30
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147334 4932 generic.go:334] "Generic (PLEG): container finished" podID="505f490e-dca8-49ae-aeeb-3392c065d841" containerID="7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38" exitCode=0
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147645 4932 generic.go:334] "Generic (PLEG): container finished" podID="505f490e-dca8-49ae-aeeb-3392c065d841" containerID="98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009" exitCode=143
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147664 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerDied","Data":"7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38"}
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147688 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerDied","Data":"98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009"}
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.341258 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412588 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") "
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412656 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") "
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") "
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") "
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412910 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") "
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.414284 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs" (OuterVolumeSpecName: "logs") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.419358 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.422355 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68" (OuterVolumeSpecName: "kube-api-access-b9s68") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "kube-api-access-b9s68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.465821 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.501987 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data" (OuterVolumeSpecName: "config-data") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516607 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516649 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516664 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516679 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516691 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.912317 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.945203 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.997302 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"]
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.997754 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc76b87d8-4l7z8" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log" containerID="cri-o://e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1" gracePeriod=30
Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.997805 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc76b87d8-4l7z8" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api" containerID="cri-o://39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd" gracePeriod=30
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.156833 4932 generic.go:334] "Generic (PLEG): container finished" podID="86cc3d08-5639-4155-bee3-b1f461184a24" containerID="e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1" exitCode=143
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.156890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerDied","Data":"e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1"}
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.166339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerDied","Data":"c7d003d8c5cc0d3edc83d2a07bde218aaf6fe754f628f14115b8310796a97a1b"}
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.166371 4932 scope.go:117] "RemoveContainer" containerID="7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38"
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.166404 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.222238 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"]
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.223197 4932 scope.go:117] "RemoveContainer" containerID="98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009"
Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.229150 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"]
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.190001 4932 generic.go:334] "Generic (PLEG): container finished" podID="86cc3d08-5639-4155-bee3-b1f461184a24" containerID="39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd" exitCode=0
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.214827 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" path="/var/lib/kubelet/pods/505f490e-dca8-49ae-aeeb-3392c065d841/volumes"
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.215356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerDied","Data":"39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd"}
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.301856 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.362825 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.362892 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.362946 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363017 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363043 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363153 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") "
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.364154 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs" (OuterVolumeSpecName: "logs") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.371336 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts" (OuterVolumeSpecName: "scripts") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.381670 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn" (OuterVolumeSpecName: "kube-api-access-hq2zn") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "kube-api-access-hq2zn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.437778 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466071 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466104 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466113 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466122 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.484596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.490893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data" (OuterVolumeSpecName: "config-data") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.500978 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.567682 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.567712 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.567724 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.202687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerDied","Data":"f46833318ddc8961d6f04764c058cb88d8c7c195fabe7b752747972666313452"}
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.202754 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.202760 4932 scope.go:117] "RemoveContainer" containerID="39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.237452 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"]
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.239348 4932 scope.go:117] "RemoveContainer" containerID="e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.240997 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"]
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.263786 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264398 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264411 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log"
Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264423 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264430 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api"
Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264439 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264447 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log"
Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264460 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264466 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264640 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264688 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264700 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264717 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.265516 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.267058 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.267592 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.269963 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-k6mtn"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.289736 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403452 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b926\" (UniqueName: \"kubernetes.io/projected/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-kube-api-access-9b926\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403882 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config-secret\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505449 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505496 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b926\" (UniqueName: \"kubernetes.io/projected/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-kube-api-access-9b926\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505563 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config-secret\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.507199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.510921 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.518614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config-secret\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.522744 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b926\" (UniqueName: \"kubernetes.io/projected/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-kube-api-access-9b926\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient"
Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.656638 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.737596 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.764492 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.764619 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:47 crc kubenswrapper[4932]: W0218 19:54:47.158541 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51bb24d5_d8d7_4bbb_a236_4967f9f7ece5.slice/crio-576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3 WatchSource:0}: Error finding container 576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3: Status 404 returned error can't find the container with id 576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3 Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.159026 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.193964 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" path="/var/lib/kubelet/pods/86cc3d08-5639-4155-bee3-b1f461184a24/volumes" Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.215036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5","Type":"ContainerStarted","Data":"576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3"} Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.739796 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.740113 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.740902 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:54:47 crc kubenswrapper[4932]: E0218 19:54:47.741155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.241126 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.242991 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.260678 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.346331 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.347746 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.354220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.354334 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.359136 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.360584 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.366549 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.367094 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.407225 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.455967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456060 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456099 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456138 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456204 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.457108 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.480772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.538214 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.539782 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.548405 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.554116 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.555363 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " 
pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557863 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.558610 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.559424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.559444 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.567223 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.576694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.579238 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.628010 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659561 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659719 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659765 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659842 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.669069 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.683012 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761340 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.763399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod 
\"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.768260 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.780757 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.782404 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.792545 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.804770 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.805114 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.814153 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.819902 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.827161 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.868731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.869083 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.910656 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76d44d77c9-sdq6t"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.912247 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.916191 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.916336 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.916374 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.921262 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76d44d77c9-sdq6t"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.971226 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.971336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.972500 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " 
pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.994708 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-run-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074359 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzmm\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-kube-api-access-4hzmm\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-public-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074586 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-combined-ca-bundle\") pod 
\"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074728 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-internal-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-etc-swift\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074929 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-config-data\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.075022 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-log-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.131738 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177126 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-public-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177245 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-combined-ca-bundle\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-internal-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177350 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-etc-swift\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-config-data\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " 
pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-log-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177525 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-run-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzmm\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-kube-api-access-4hzmm\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.178439 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-log-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.188600 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-internal-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc 
kubenswrapper[4932]: I0218 19:54:49.188822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-etc-swift\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-public-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189229 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-config-data\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189592 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-run-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189959 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-combined-ca-bundle\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.196121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4hzmm\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-kube-api-access-4hzmm\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.240588 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.248274 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.281495 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.289122 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" containerID="cri-o://b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.290060 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" containerID="cri-o://4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.290250 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" containerID="cri-o://f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.290272 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" containerID="cri-o://7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.297198 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qlt9g" event={"ID":"ccc8867f-cb56-47ad-9d08-a25feca678fc","Type":"ContainerStarted","Data":"522c3953c61b5f262db6bc25a0ecf8315469173cbfa7c3d2fd3d78690775ae88"} Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.306840 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": EOF" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.336270 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.422130 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.554319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.595720 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.859912 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 19:54:49 crc kubenswrapper[4932]: W0218 19:54:49.874466 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20264fab_dfb6_4e8c_90c3_755f6877b798.slice/crio-052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a WatchSource:0}: Error finding 
container 052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a: Status 404 returned error can't find the container with id 052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.181055 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76d44d77c9-sdq6t"] Feb 18 19:54:50 crc kubenswrapper[4932]: W0218 19:54:50.237672 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd359b774_654c_4532_8f81_e1beddd68479.slice/crio-cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428 WatchSource:0}: Error finding container cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428: Status 404 returned error can't find the container with id cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.314026 4932 generic.go:334] "Generic (PLEG): container finished" podID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" containerID="a9bd3203306587d945952a2d8b8a38aa992a6b26567d9b7e7b075edf3005412d" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.314109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a786-account-create-update-jrb5b" event={"ID":"aec70d32-3fdc-410f-9d9d-9b108e079cfe","Type":"ContainerDied","Data":"a9bd3203306587d945952a2d8b8a38aa992a6b26567d9b7e7b075edf3005412d"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.314134 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a786-account-create-update-jrb5b" event={"ID":"aec70d32-3fdc-410f-9d9d-9b108e079cfe","Type":"ContainerStarted","Data":"eccf3794a52ee544f0c22944383f1094d40cfa00baf41cfaf87d8812fbfa11b9"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.317163 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" 
event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerStarted","Data":"5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.317212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerStarted","Data":"66cc7eba623075a858422eb55af26df80c38d6f6aee87f6f13279af0d186f3b3"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.332533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerStarted","Data":"2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.332589 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerStarted","Data":"0fd5d1eb515a389872d9f7400736a47e9170b5a4b1480bff777bfe89c3983124"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.359499 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.363385 4932 generic.go:334] "Generic (PLEG): container finished" podID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerID="708bd68c17f2cb8bb6aefdb45fc9ab2a2b088e8be75ba3d7c52b1b8b365c0f1f" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.363438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zxht6" event={"ID":"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6","Type":"ContainerDied","Data":"708bd68c17f2cb8bb6aefdb45fc9ab2a2b088e8be75ba3d7c52b1b8b365c0f1f"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.363460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zxht6" event={"ID":"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6","Type":"ContainerStarted","Data":"2b9d7295e4991fa83586cb008679930ba6602febda912ae2e974202145b5bda9"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.369365 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d44d77c9-sdq6t" event={"ID":"d359b774-654c-4532-8f81-e1beddd68479","Type":"ContainerStarted","Data":"cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.375335 4932 generic.go:334] "Generic (PLEG): container finished" podID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerID="561bed36cff9fe4632c1003655b4ef598d4e8ea47f27f52a6c7b3f87e135ec7f" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.375370 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qlt9g" event={"ID":"ccc8867f-cb56-47ad-9d08-a25feca678fc","Type":"ContainerDied","Data":"561bed36cff9fe4632c1003655b4ef598d4e8ea47f27f52a6c7b3f87e135ec7f"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.388926 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-5405-account-create-update-8fjff" podStartSLOduration=2.388912498 podStartE2EDuration="2.388912498s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:50.373148688 +0000 UTC m=+1253.955103533" watchObservedRunningTime="2026-02-18 19:54:50.388912498 +0000 UTC m=+1253.970867343" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.389232 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xdsn5" podStartSLOduration=2.389227246 podStartE2EDuration="2.389227246s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:50.387121104 +0000 UTC m=+1253.969075949" watchObservedRunningTime="2026-02-18 19:54:50.389227246 +0000 UTC m=+1253.971182091" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394339 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394360 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" exitCode=2 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394369 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394375 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" 
containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394446 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394451 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394475 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394465 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.397050 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" event={"ID":"20264fab-dfb6-4e8c-90c3-755f6877b798","Type":"ContainerStarted","Data":"052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412621 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412704 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412726 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412834 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412971 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412996 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.413018 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.414872 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.418235 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.418523 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts" (OuterVolumeSpecName: "scripts") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.421821 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv" (OuterVolumeSpecName: "kube-api-access-x5dzv") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "kube-api-access-x5dzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.450377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.457205 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" podStartSLOduration=2.457187361 podStartE2EDuration="2.457187361s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:50.443519743 +0000 UTC m=+1254.025474588" watchObservedRunningTime="2026-02-18 19:54:50.457187361 +0000 UTC m=+1254.039142206" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.487855 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515327 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515361 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515372 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515380 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515388 4932 reconciler_common.go:293] "Volume detached 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.592320 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data" (OuterVolumeSpecName: "config-data") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.617543 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.618700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.703187 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.718782 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.732602 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.749158 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.751355 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.767525 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768038 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768062 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768082 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768091 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768105 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768113 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768157 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768165 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768421 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768447 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768474 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768493 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.782444 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.782553 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.784708 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.784781 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.792546 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.794149 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794190 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} err="failed to get container status \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794212 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.794592 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not exist" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794613 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} err="failed to get container status \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": rpc error: code = NotFound desc = could not find container \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794625 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.795530 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.795552 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} err="failed to get container status \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID 
starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.795564 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.797240 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.797332 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} err="failed to get container status \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.797377 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.798853 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} err="failed to get container status \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": 
container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.798875 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799351 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} err="failed to get container status \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": rpc error: code = NotFound desc = could not find container \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799369 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799524 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} err="failed to get container status \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799539 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.800256 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} err="failed to get container status \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.800297 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.804380 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} err="failed to get container status \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.804424 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807289 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} err="failed to get container status \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": rpc error: code = NotFound desc = could not find container \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not 
exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807340 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807789 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} err="failed to get container status \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807829 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.809156 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} err="failed to get container status \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821166 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821286 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821428 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923781 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923819 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923926 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 
19:54:50.923970 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.924024 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.924854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.925732 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.929970 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.932026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " 
pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.932841 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.932950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.939043 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.139242 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.211055 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" path="/var/lib/kubelet/pods/f81248d0-bf30-4447-ad78-7bfe9048bbea/volumes" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.409963 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.417676 4932 generic.go:334] "Generic (PLEG): container finished" podID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerID="35671852602ab05670d4f45f3855e4d52f08702c9d127db3894e27656cb622ec" exitCode=0 Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.417713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" event={"ID":"20264fab-dfb6-4e8c-90c3-755f6877b798","Type":"ContainerDied","Data":"35671852602ab05670d4f45f3855e4d52f08702c9d127db3894e27656cb622ec"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.423010 4932 generic.go:334] "Generic (PLEG): container finished" podID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerID="5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48" exitCode=0 Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.423075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerDied","Data":"5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.431395 4932 generic.go:334] "Generic (PLEG): container finished" podID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerID="2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58" exitCode=0 Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.431479 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerDied","Data":"2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.456884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d44d77c9-sdq6t" event={"ID":"d359b774-654c-4532-8f81-e1beddd68479","Type":"ContainerStarted","Data":"9ef9fe7fd0dd824a4a8a97997a3b05a087e8e402d080eb76fa8c5145581ddd86"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.456963 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d44d77c9-sdq6t" event={"ID":"d359b774-654c-4532-8f81-e1beddd68479","Type":"ContainerStarted","Data":"828c4029ea8fb0715354ae424762b45e3d362d72ff1a5f3ab9d98154f78c36b0"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.458018 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.458048 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.522761 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76d44d77c9-sdq6t" podStartSLOduration=3.522746574 podStartE2EDuration="3.522746574s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:51.50001009 +0000 UTC m=+1255.081964935" watchObservedRunningTime="2026-02-18 19:54:51.522746574 +0000 UTC m=+1255.104701419" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.610610 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.905041 4932 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.948748 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.948785 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.954297 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aec70d32-3fdc-410f-9d9d-9b108e079cfe" (UID: "aec70d32-3fdc-410f-9d9d-9b108e079cfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.954869 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.961415 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg" (OuterVolumeSpecName: "kube-api-access-qvdhg") pod "aec70d32-3fdc-410f-9d9d-9b108e079cfe" (UID: "aec70d32-3fdc-410f-9d9d-9b108e079cfe"). InnerVolumeSpecName "kube-api-access-qvdhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.047619 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.056238 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.059212 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172312 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"ccc8867f-cb56-47ad-9d08-a25feca678fc\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172438 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172514 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"ccc8867f-cb56-47ad-9d08-a25feca678fc\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172656 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.180480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" (UID: "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.180563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccc8867f-cb56-47ad-9d08-a25feca678fc" (UID: "ccc8867f-cb56-47ad-9d08-a25feca678fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.184726 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59" (OuterVolumeSpecName: "kube-api-access-fds59") pod "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" (UID: "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6"). InnerVolumeSpecName "kube-api-access-fds59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.193548 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x" (OuterVolumeSpecName: "kube-api-access-fdj9x") pod "ccc8867f-cb56-47ad-9d08-a25feca678fc" (UID: "ccc8867f-cb56-47ad-9d08-a25feca678fc"). InnerVolumeSpecName "kube-api-access-fdj9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274738 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274785 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274799 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274814 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.483665 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a786-account-create-update-jrb5b" event={"ID":"aec70d32-3fdc-410f-9d9d-9b108e079cfe","Type":"ContainerDied","Data":"eccf3794a52ee544f0c22944383f1094d40cfa00baf41cfaf87d8812fbfa11b9"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.483712 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccf3794a52ee544f0c22944383f1094d40cfa00baf41cfaf87d8812fbfa11b9" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.483783 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.489829 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.489871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.489879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"bb80acf58868f86869a2edd8ebddc1372e30bf85bb8346fbc78e3b03f8adb9d4"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.497617 4932 generic.go:334] "Generic (PLEG): container finished" podID="dec0e208-2bfc-4661-8395-c56418bb9307" containerID="8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7" exitCode=137 Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.497672 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerDied","Data":"8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.514347 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zxht6" event={"ID":"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6","Type":"ContainerDied","Data":"2b9d7295e4991fa83586cb008679930ba6602febda912ae2e974202145b5bda9"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.514389 4932 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2b9d7295e4991fa83586cb008679930ba6602febda912ae2e974202145b5bda9" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.514444 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.532034 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qlt9g" event={"ID":"ccc8867f-cb56-47ad-9d08-a25feca678fc","Type":"ContainerDied","Data":"522c3953c61b5f262db6bc25a0ecf8315469173cbfa7c3d2fd3d78690775ae88"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.532093 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522c3953c61b5f262db6bc25a0ecf8315469173cbfa7c3d2fd3d78690775ae88" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.532046 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.704048 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782644 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782717 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782905 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.783025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.783072 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h566q\" (UniqueName: 
\"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.783113 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.785286 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs" (OuterVolumeSpecName: "logs") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.792637 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.804447 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q" (OuterVolumeSpecName: "kube-api-access-h566q") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "kube-api-access-h566q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.820665 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data" (OuterVolumeSpecName: "config-data") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.875862 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.878782 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts" (OuterVolumeSpecName: "scripts") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895688 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895725 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895737 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895751 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895764 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895774 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.948533 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.970950 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.997373 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"20264fab-dfb6-4e8c-90c3-755f6877b798\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.997455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"20264fab-dfb6-4e8c-90c3-755f6877b798\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.997999 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.998370 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20264fab-dfb6-4e8c-90c3-755f6877b798" (UID: "20264fab-dfb6-4e8c-90c3-755f6877b798"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.006410 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq" (OuterVolumeSpecName: "kube-api-access-n9cbq") pod "20264fab-dfb6-4e8c-90c3-755f6877b798" (UID: "20264fab-dfb6-4e8c-90c3-755f6877b798"). InnerVolumeSpecName "kube-api-access-n9cbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.045997 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.078561 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.099016 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.103231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.103930 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" (UID: "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105320 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105344 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105353 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105608 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll" (OuterVolumeSpecName: "kube-api-access-lcrll") pod "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" (UID: "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3"). InnerVolumeSpecName "kube-api-access-lcrll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.209113 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"7703d71c-4ee9-4495-ab74-0a76c148d377\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.209546 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"7703d71c-4ee9-4495-ab74-0a76c148d377\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.210534 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.210925 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7703d71c-4ee9-4495-ab74-0a76c148d377" (UID: "7703d71c-4ee9-4495-ab74-0a76c148d377"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.213656 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl" (OuterVolumeSpecName: "kube-api-access-ktkjl") pod "7703d71c-4ee9-4495-ab74-0a76c148d377" (UID: "7703d71c-4ee9-4495-ab74-0a76c148d377"). InnerVolumeSpecName "kube-api-access-ktkjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.313286 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.313340 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.545401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" event={"ID":"20264fab-dfb6-4e8c-90c3-755f6877b798","Type":"ContainerDied","Data":"052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.545467 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.545564 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.548521 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.551482 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerDied","Data":"0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.551597 4932 scope.go:117] "RemoveContainer" containerID="c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.551788 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.556112 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerDied","Data":"66cc7eba623075a858422eb55af26df80c38d6f6aee87f6f13279af0d186f3b3"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.556152 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cc7eba623075a858422eb55af26df80c38d6f6aee87f6f13279af0d186f3b3" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.556152 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.566261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerDied","Data":"0fd5d1eb515a389872d9f7400736a47e9170b5a4b1480bff777bfe89c3983124"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.566318 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd5d1eb515a389872d9f7400736a47e9170b5a4b1480bff777bfe89c3983124" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.566409 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.599155 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.606937 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.753405 4932 scope.go:117] "RemoveContainer" containerID="8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7" Feb 18 19:54:55 crc kubenswrapper[4932]: I0218 19:54:55.189920 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" path="/var/lib/kubelet/pods/dec0e208-2bfc-4661-8395-c56418bb9307/volumes" Feb 18 19:54:57 crc kubenswrapper[4932]: I0218 19:54:57.640326 4932 generic.go:334] "Generic (PLEG): container finished" podID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerID="ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73" exitCode=137 Feb 18 19:54:57 crc kubenswrapper[4932]: I0218 19:54:57.640449 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerDied","Data":"ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73"} Feb 18 19:54:57 crc kubenswrapper[4932]: I0218 19:54:57.818109 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.183:8776/healthcheck\": dial tcp 10.217.0.183:8776: connect: connection refused" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.180529 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.180763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810273 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810648 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810661 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810677 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810684 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810695 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810703 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810728 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810734 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810748 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810753 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811029 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.811040 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" 
containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811046 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.811055 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811060 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811228 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811243 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811252 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811260 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811274 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811282 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811292 4932 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811298 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811862 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.814305 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.814307 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.814845 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rd8q2" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.834384 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961154 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961243 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " 
pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961500 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063640 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063708 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063775 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: 
\"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063824 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.071088 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.072612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.072625 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.082621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: 
\"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.095650 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.166732 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.168115 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.168314 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5966846f96-hbrsw" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api" containerID="cri-o://8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" gracePeriod=30 Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.175850 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5966846f96-hbrsw" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd" containerID="cri-o://44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" gracePeriod=30 Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.248293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.261274 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.666022 4932 generic.go:334] "Generic (PLEG): container finished" podID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" exitCode=0 Feb 18 19:54:59 crc kubenswrapper[4932]: 
I0218 19:54:59.667088 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerDied","Data":"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"} Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.358955 4932 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5bd90883-79db-4903-87ab-828b9608f9fa"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5bd90883-79db-4903-87ab-828b9608f9fa] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5bd90883_79db_4903_87ab_828b9608f9fa.slice" Feb 18 19:55:01 crc kubenswrapper[4932]: E0218 19:55:01.359437 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5bd90883-79db-4903-87ab-828b9608f9fa] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5bd90883-79db-4903-87ab-828b9608f9fa] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5bd90883_79db_4903_87ab_828b9608f9fa.slice" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.384105 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.687160 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.841627 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.860533 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.893767 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.894958 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.900874 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.910460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.011777 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbg4\" (UniqueName: \"kubernetes.io/projected/c7feb603-1c6f-423f-979e-840070052a6f-kube-api-access-wzbg4\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022546 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7feb603-1c6f-423f-979e-840070052a6f-logs\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-config-data\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124075 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: 
I0218 19:55:02.124320 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124344 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124490 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124538 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" 
(UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-config-data\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124906 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbg4\" (UniqueName: \"kubernetes.io/projected/c7feb603-1c6f-423f-979e-840070052a6f-kube-api-access-wzbg4\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124964 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7feb603-1c6f-423f-979e-840070052a6f-logs\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.126219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7feb603-1c6f-423f-979e-840070052a6f-logs\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.127330 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.129596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs" (OuterVolumeSpecName: "logs") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.131570 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.137430 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.137569 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf" (OuterVolumeSpecName: "kube-api-access-wdllf") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "kube-api-access-wdllf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.138083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-config-data\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.140433 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts" (OuterVolumeSpecName: "scripts") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.152978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbg4\" (UniqueName: \"kubernetes.io/projected/c7feb603-1c6f-423f-979e-840070052a6f-kube-api-access-wzbg4\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.169497 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.223205 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data" (OuterVolumeSpecName: "config-data") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.225963 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227118 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227138 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227149 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227158 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227167 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227187 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227195 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.288753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.691537 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.700561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.700724 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent" containerID="cri-o://3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.700993 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.701258 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd" containerID="cri-o://0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.701298 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core" containerID="cri-o://ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.701330 
4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent" containerID="cri-o://06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.705233 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerStarted","Data":"956c5d87f252b7fd42789858690e26fc0ca0b5a7be8c4cc152d63b1bddc300e7"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.708505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerDied","Data":"00c41dbe58ad3dc460e41a4f8f86809ef9204f330e62756cf3eed317cf475042"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.708544 4932 scope.go:117] "RemoveContainer" containerID="ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.708664 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:02 crc kubenswrapper[4932]: W0218 19:55:02.712966 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7feb603_1c6f_423f_979e_840070052a6f.slice/crio-08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3 WatchSource:0}: Error finding container 08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3: Status 404 returned error can't find the container with id 08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.716932 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5","Type":"ContainerStarted","Data":"eced2922a93e5d472a9c76467bf01b45bd012e755d967ed8252968ef17137a74"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.729572 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.540934203 podStartE2EDuration="12.729557008s" podCreationTimestamp="2026-02-18 19:54:50 +0000 UTC" firstStartedPulling="2026-02-18 19:54:51.63064189 +0000 UTC m=+1255.212596735" lastFinishedPulling="2026-02-18 19:55:01.819264695 +0000 UTC m=+1265.401219540" observedRunningTime="2026-02-18 19:55:02.727958498 +0000 UTC m=+1266.309913353" watchObservedRunningTime="2026-02-18 19:55:02.729557008 +0000 UTC m=+1266.311511853" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.756534 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.098106433 podStartE2EDuration="16.756516246s" podCreationTimestamp="2026-02-18 19:54:46 +0000 UTC" firstStartedPulling="2026-02-18 19:54:47.160792371 +0000 UTC m=+1250.742747226" lastFinishedPulling="2026-02-18 19:55:01.819202194 +0000 UTC m=+1265.401157039" observedRunningTime="2026-02-18 
19:55:02.749845341 +0000 UTC m=+1266.331800196" watchObservedRunningTime="2026-02-18 19:55:02.756516246 +0000 UTC m=+1266.338471091" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.895433 4932 scope.go:117] "RemoveContainer" containerID="b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.913558 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.921386 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937316 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: E0218 19:55:02.937696 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937712 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" Feb 18 19:55:02 crc kubenswrapper[4932]: E0218 19:55:02.937737 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937744 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937909 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937936 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.938969 
4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.940915 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.941071 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.941185 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.956904 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47232719-278b-4937-b20a-df608aa754ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048567 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048626 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47232719-278b-4937-b20a-df608aa754ff-logs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048752 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szp8d\" (UniqueName: \"kubernetes.io/projected/47232719-278b-4937-b20a-df608aa754ff-kube-api-access-szp8d\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048819 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-scripts\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048946 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.049131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.049282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151685 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47232719-278b-4937-b20a-df608aa754ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151798 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47232719-278b-4937-b20a-df608aa754ff-logs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151839 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szp8d\" (UniqueName: \"kubernetes.io/projected/47232719-278b-4937-b20a-df608aa754ff-kube-api-access-szp8d\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151895 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-scripts\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 
19:55:03.151921 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.152037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.153114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47232719-278b-4937-b20a-df608aa754ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.155954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47232719-278b-4937-b20a-df608aa754ff-logs\") pod \"cinder-api-0\" (UID: 
\"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.158316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.158918 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-scripts\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.159236 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.160231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.161620 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.164436 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.169499 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szp8d\" (UniqueName: \"kubernetes.io/projected/47232719-278b-4937-b20a-df608aa754ff-kube-api-access-szp8d\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.189510 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" path="/var/lib/kubelet/pods/30bd9d4f-e84f-4320-9057-80d3d53f7ebb/volumes" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.190382 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" path="/var/lib/kubelet/pods/5bd90883-79db-4903-87ab-828b9608f9fa/volumes" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.280468 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.733018 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739454 4932 generic.go:334] "Generic (PLEG): container finished" podID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" exitCode=0 Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerDied","Data":"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739840 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerDied","Data":"b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739932 4932 scope.go:117] "RemoveContainer" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.740103 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744149 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7" exitCode=0
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744493 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0" exitCode=2
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744579 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e" exitCode=0
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744639 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a" exitCode=0
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744718 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7"}
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0"}
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744848 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e"}
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744910 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a"}
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.757990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"c7feb603-1c6f-423f-979e-840070052a6f","Type":"ContainerStarted","Data":"21a693b21bd92f9ee466a319d872d778fe453f52a58b874af7c6007ff9102392"}
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.758030 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"c7feb603-1c6f-423f-979e-840070052a6f","Type":"ContainerStarted","Data":"08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3"}
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.779787 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.7797712089999997 podStartE2EDuration="2.779771209s" podCreationTimestamp="2026-02-18 19:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:03.77537176 +0000 UTC m=+1267.357326605" watchObservedRunningTime="2026-02-18 19:55:03.779771209 +0000 UTC m=+1267.361726054"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.785110 4932 scope.go:117] "RemoveContainer" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.822759 4932 scope.go:117] "RemoveContainer" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"
Feb 18 19:55:03 crc kubenswrapper[4932]: E0218 19:55:03.824306 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09\": container with ID starting with 44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09 not found: ID does not exist" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.824419 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"} err="failed to get container status \"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09\": rpc error: code = NotFound desc = could not find container \"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09\": container with ID starting with 44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09 not found: ID does not exist"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.824516 4932 scope.go:117] "RemoveContainer" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"
Feb 18 19:55:03 crc kubenswrapper[4932]: E0218 19:55:03.824945 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f\": container with ID starting with 8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f not found: ID does not exist" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.824993 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"} err="failed to get container status \"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f\": rpc error: code = NotFound desc = could not find container \"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f\": container with ID starting with 8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f not found: ID does not exist"
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.831499 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866107 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") "
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") "
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") "
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866803 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") "
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") "
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.874039 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn" (OuterVolumeSpecName: "kube-api-access-d54tn") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "kube-api-access-d54tn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.874857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.940239 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.959100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969297 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969318 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969331 4932 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969340 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.983782 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config" (OuterVolumeSpecName: "config") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.079275 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"]
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.084667 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.089390 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"]
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.168559 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287678 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287735 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287809 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287840 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287909 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287955 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.288016 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") "
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.288972 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.289399 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.294121 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j" (OuterVolumeSpecName: "kube-api-access-8tm9j") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "kube-api-access-8tm9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.297290 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts" (OuterVolumeSpecName: "scripts") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.315533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.372822 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395123 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395158 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395181 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395191 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395202 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395210 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.411085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data" (OuterVolumeSpecName: "config-data") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.496339 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.791563 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47232719-278b-4937-b20a-df608aa754ff","Type":"ContainerStarted","Data":"55666644405d724e92eb66ca6ff0a5a0536ce22f739acb31579e42ce03c8c6dd"}
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.791924 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47232719-278b-4937-b20a-df608aa754ff","Type":"ContainerStarted","Data":"11091381623a887fe17ca074aad9c76b9bf435944dbf795a455c02b8aed96137"}
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.820167 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.821207 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"bb80acf58868f86869a2edd8ebddc1372e30bf85bb8346fbc78e3b03f8adb9d4"}
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.821265 4932 scope.go:117] "RemoveContainer" containerID="0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.904499 4932 scope.go:117] "RemoveContainer" containerID="ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.905757 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.921109 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.934886 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935398 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935420 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api"
Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935434 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935445 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core"
Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935471 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935477 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent"
Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935487 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935493 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd"
Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935503 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935509 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd"
Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935530 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935536 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935733 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935745 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935755 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935856 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935872 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935882 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.937893 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.940582 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.940867 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.957277 4932 scope.go:117] "RemoveContainer" containerID="06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e"
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.960418 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.986066 4932 scope.go:117] "RemoveContainer" containerID="3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134902 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134957 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134975 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.135019 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.135049 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.135064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.201446 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" path="/var/lib/kubelet/pods/42f96153-201b-4efb-952d-ec27dcbd8c0c/volumes"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.202131 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" path="/var/lib/kubelet/pods/fb1c0405-2770-4a03-ba51-c78005d57ad9/volumes"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236703 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236777 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.237232 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.237584 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.245244 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.252028 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.252738 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.263217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.263783 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.268606 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.776148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.832858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47232719-278b-4937-b20a-df608aa754ff","Type":"ContainerStarted","Data":"3e7e41a4426e9ada8ce8a7dd8e1993272ee9ec065c595c55451ded476c951f03"}
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.833038 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.838430 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"f4124030be51a32309b149f7b80243f14d8defbe91c31e12165acaf4898b489f"}
Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.853949 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.853928692 podStartE2EDuration="3.853928692s" podCreationTimestamp="2026-02-18 19:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:05.848621751 +0000 UTC m=+1269.430576596" watchObservedRunningTime="2026-02-18 19:55:05.853928692 +0000 UTC m=+1269.435883537"
Feb 18 19:55:06 crc kubenswrapper[4932]: I0218 19:55:06.074304 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:55:06 crc kubenswrapper[4932]: I0218 19:55:06.858515 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be"}
Feb 18 19:55:06 crc kubenswrapper[4932]: I0218 19:55:06.858900 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633"}
Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.227261 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.740311 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.740530 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.741508 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"
Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.875328 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb"}
Feb 18 19:55:08 crc kubenswrapper[4932]: I0218 19:55:08.889635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef"}
Feb 18 19:55:12 crc kubenswrapper[4932]: I0218 19:55:12.227268 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 18 19:55:12 crc kubenswrapper[4932]: I0218 19:55:12.257930 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 18 19:55:12 crc kubenswrapper[4932]: I0218 19:55:12.960720 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.463834 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.963458 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerStarted","Data":"4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b"}
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0"}
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965872 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" containerID="cri-o://169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633" gracePeriod=30
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965916 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965957 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" containerID="cri-o://4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0" gracePeriod=30
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965974 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" containerID="cri-o://2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be" gracePeriod=30
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965991 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" containerID="cri-o://5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb" gracePeriod=30
Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.992065 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-64b8m" podStartSLOduration=5.268223185 podStartE2EDuration="18.992045882s" podCreationTimestamp="2026-02-18 19:54:58 +0000 UTC" firstStartedPulling="2026-02-18 19:55:02.32112472 +0000 UTC m=+1265.903079565" lastFinishedPulling="2026-02-18 19:55:16.044947407 +0000 UTC m=+1279.626902262" observedRunningTime="2026-02-18 19:55:16.982811753 +0000 UTC m=+1280.564766598" watchObservedRunningTime="2026-02-18 19:55:16.992045882 +0000 UTC m=+1280.574000737"
Feb 18 19:55:17 crc kubenswrapper[4932]: I0218 19:55:17.021770 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.777816193 podStartE2EDuration="13.021744829s" podCreationTimestamp="2026-02-18 19:55:04 +0000 UTC" firstStartedPulling="2026-02-18 19:55:05.797445202 +0000 UTC m=+1269.379400047" lastFinishedPulling="2026-02-18 19:55:16.041373838 +0000 UTC m=+1279.623328683" observedRunningTime="2026-02-18 19:55:17.015909894 +0000 UTC m=+1280.597864749" watchObservedRunningTime="2026-02-18 19:55:17.021744829 +0000 UTC m=+1280.603699674"
Feb 18 19:55:17 crc kubenswrapper[4932]: E0218 19:55:17.486505 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures:
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d92a28e_33fd_49cd_ba7e_1b12f1b4628b.slice/crio-conmon-169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d92a28e_33fd_49cd_ba7e_1b12f1b4628b.slice/crio-169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:55:17 crc kubenswrapper[4932]: I0218 19:55:17.740703 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:17 crc kubenswrapper[4932]: I0218 19:55:17.773702 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003496 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0" exitCode=0 Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003540 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb" exitCode=2 Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003547 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633" exitCode=0 Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003982 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0"} Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.004089 4932 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb"} Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.004156 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633"} Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.005247 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.059402 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.128249 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.025402 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be" exitCode=0 Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.025466 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be"} Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.025783 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" containerID="cri-o://fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" gracePeriod=30 Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 
19:55:20.130633 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.218877 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.218934 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.218999 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219122 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219156 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219295 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219317 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219660 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.220283 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.220849 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.230421 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts" (OuterVolumeSpecName: "scripts") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.230542 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x" (OuterVolumeSpecName: "kube-api-access-k7z7x") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "kube-api-access-k7z7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.263642 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.301626 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322103 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322129 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322140 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322150 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322158 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.331267 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data" (OuterVolumeSpecName: "config-data") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.423935 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.040969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"f4124030be51a32309b149f7b80243f14d8defbe91c31e12165acaf4898b489f"} Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.041247 4932 scope.go:117] "RemoveContainer" containerID="4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.041052 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.065391 4932 scope.go:117] "RemoveContainer" containerID="5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.086044 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.109584 4932 scope.go:117] "RemoveContainer" containerID="2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.111348 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.135922 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136469 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" Feb 18 19:55:21 crc 
kubenswrapper[4932]: I0218 19:55:21.136494 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136523 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136545 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136554 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136583 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136591 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136824 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136848 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136867 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" Feb 18 19:55:21 crc 
kubenswrapper[4932]: I0218 19:55:21.136887 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.138977 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.148129 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.152912 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.153122 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.159455 4932 scope.go:117] "RemoveContainer" containerID="169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.195276 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" path="/var/lib/kubelet/pods/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b/volumes" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.249729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.249808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " 
pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.249834 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250661 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250775 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352010 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352097 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352253 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"ceilometer-0\" (UID: 
\"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352285 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352867 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.353295 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.359374 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.359554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.384356 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.391582 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.397288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.462737 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.927040 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: W0218 19:55:21.929192 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b53bc70_03d1_4b04_8b5e_bf135aed16bc.slice/crio-645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba WatchSource:0}: Error finding container 645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba: Status 404 returned error can't find the container with id 645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.932284 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.058766 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba"} Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.740949 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786794 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786974 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.787084 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.788263 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs" (OuterVolumeSpecName: "logs") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.797353 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k" (OuterVolumeSpecName: "kube-api-access-9tb6k") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "kube-api-access-9tb6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.823722 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.841357 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.889309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data" (OuterVolumeSpecName: "config-data") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890209 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890239 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890271 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890283 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890292 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.069520 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" 
containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.069626 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.069643 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.070573 4932 scope.go:117] "RemoveContainer" containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.070774 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"04f5dff2832c6635da78aa840490b39a4906ea50c8d89ba21f85a3c5474f7c9b"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.075928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.075974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.108307 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.124471 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.136059 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145050 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.145525 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145541 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.145552 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145558 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.145589 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145596 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145757 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145775 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" 
containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145785 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.152942 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.153054 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.156620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.170513 4932 scope.go:117] "RemoveContainer" containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.175302 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef\": container with ID starting with fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef not found: ID does not exist" containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.175344 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef"} err="failed to get container status \"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef\": rpc error: code = NotFound desc = could not find container \"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef\": container with ID starting with fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef not found: ID does 
not exist" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.175372 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.177558 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0\": container with ID starting with ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0 not found: ID does not exist" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.177619 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"} err="failed to get container status \"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0\": rpc error: code = NotFound desc = could not find container \"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0\": container with ID starting with ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0 not found: ID does not exist" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196548 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2de1e0e-9137-47d7-ab62-ae47f646f26e-logs\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196642 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68b59\" (UniqueName: \"kubernetes.io/projected/b2de1e0e-9137-47d7-ab62-ae47f646f26e-kube-api-access-68b59\") pod \"watcher-decision-engine-0\" (UID: 
\"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196699 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196725 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196743 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.198082 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" path="/var/lib/kubelet/pods/0882c686-1b07-4ac7-a6be-148eff7faa19/volumes" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.298992 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 
19:55:23.299051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.299301 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2de1e0e-9137-47d7-ab62-ae47f646f26e-logs\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.299387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68b59\" (UniqueName: \"kubernetes.io/projected/b2de1e0e-9137-47d7-ab62-ae47f646f26e-kube-api-access-68b59\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.299535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.301794 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2de1e0e-9137-47d7-ab62-ae47f646f26e-logs\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.304911 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.305007 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.308785 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.321910 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68b59\" (UniqueName: \"kubernetes.io/projected/b2de1e0e-9137-47d7-ab62-ae47f646f26e-kube-api-access-68b59\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.518260 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: W0218 19:55:23.987477 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2de1e0e_9137_47d7_ab62_ae47f646f26e.slice/crio-b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595 WatchSource:0}: Error finding container b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595: Status 404 returned error can't find the container with id b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595 Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.988671 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:24 crc kubenswrapper[4932]: I0218 19:55:24.089689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9"} Feb 18 19:55:24 crc kubenswrapper[4932]: I0218 19:55:24.091007 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b2de1e0e-9137-47d7-ab62-ae47f646f26e","Type":"ContainerStarted","Data":"b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595"} Feb 18 19:55:25 crc kubenswrapper[4932]: I0218 19:55:25.104607 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b2de1e0e-9137-47d7-ab62-ae47f646f26e","Type":"ContainerStarted","Data":"c3fe695810a82bae333705df40e3a822375c89eaf4ee576f93fc93a553eaaf04"} Feb 18 19:55:25 crc kubenswrapper[4932]: I0218 19:55:25.123722 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.123702261 podStartE2EDuration="2.123702261s" podCreationTimestamp="2026-02-18 19:55:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:25.120581064 +0000 UTC m=+1288.702535909" watchObservedRunningTime="2026-02-18 19:55:25.123702261 +0000 UTC m=+1288.705657116" Feb 18 19:55:26 crc kubenswrapper[4932]: I0218 19:55:26.116565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe"} Feb 18 19:55:26 crc kubenswrapper[4932]: I0218 19:55:26.142459 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.557485598 podStartE2EDuration="5.142437133s" podCreationTimestamp="2026-02-18 19:55:21 +0000 UTC" firstStartedPulling="2026-02-18 19:55:21.932083569 +0000 UTC m=+1285.514038414" lastFinishedPulling="2026-02-18 19:55:25.517035094 +0000 UTC m=+1289.098989949" observedRunningTime="2026-02-18 19:55:26.135305286 +0000 UTC m=+1289.717260131" watchObservedRunningTime="2026-02-18 19:55:26.142437133 +0000 UTC m=+1289.724391978" Feb 18 19:55:27 crc kubenswrapper[4932]: I0218 19:55:27.127811 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:27 crc kubenswrapper[4932]: I0218 19:55:27.605632 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:55:27 crc kubenswrapper[4932]: I0218 19:55:27.605995 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:55:29 crc kubenswrapper[4932]: I0218 19:55:29.877822 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:29 crc kubenswrapper[4932]: I0218 19:55:29.878358 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" containerID="cri-o://58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63" gracePeriod=30 Feb 18 19:55:29 crc kubenswrapper[4932]: I0218 19:55:29.878429 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" containerID="cri-o://fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.160059 4932 generic.go:334] "Generic (PLEG): container finished" podID="67750e31-ed62-4908-9b56-3a46be936224" containerID="58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63" exitCode=143 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.160116 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerDied","Data":"58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63"} Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525399 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525661 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" 
containerID="cri-o://352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525721 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" containerID="cri-o://ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525769 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" containerID="cri-o://004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525779 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" containerID="cri-o://7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909" gracePeriod=30 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.160925 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.161396 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" containerID="cri-o://ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3" gracePeriod=30 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.161950 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" 
containerID="cri-o://99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6" gracePeriod=30 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.187009 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe" exitCode=0 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.187036 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9" exitCode=2 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.192522 4932 generic.go:334] "Generic (PLEG): container finished" podID="67750e31-ed62-4908-9b56-3a46be936224" containerID="fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37" exitCode=0 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.198041 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe"} Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.198078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9"} Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.198089 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerDied","Data":"fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37"} Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.363768 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450646 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450948 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450987 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451302 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs" (OuterVolumeSpecName: "logs") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451794 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451826 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451885 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451908 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.452399 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.452566 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.465404 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.468898 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts" (OuterVolumeSpecName: "scripts") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.470380 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv" (OuterVolumeSpecName: "kube-api-access-2bxxv") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "kube-api-access-2bxxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.503353 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.530658 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554670 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554712 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554722 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554736 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554764 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554775 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bxxv\" (UniqueName: 
\"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.562740 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data" (OuterVolumeSpecName: "config-data") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.582850 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.656120 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.656156 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.218911 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerDied","Data":"49ded8c61eff3d7eb04054517499be8ecf50df374bdd44a32ed528213544141a"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.219010 4932 scope.go:117] "RemoveContainer" containerID="fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.218952 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224071 4932 generic.go:334] "Generic (PLEG): container finished" podID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerID="99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6" exitCode=0 Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224107 4932 generic.go:334] "Generic (PLEG): container finished" podID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerID="ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3" exitCode=143 Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerDied","Data":"99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerDied","Data":"ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.236330 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909" exitCode=0 Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.236374 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.289352 4932 scope.go:117] "RemoveContainer" containerID="58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 
19:55:32.293436 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.312719 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.338205 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: E0218 19:55:32.339664 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.339686 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" Feb 18 19:55:32 crc kubenswrapper[4932]: E0218 19:55:32.339715 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.339722 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" Feb 18 19:55:32 crc kubenswrapper[4932]: E0218 19:55:32.339753 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.339760 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.340951 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.340996 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.341036 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.348069 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.357764 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.362705 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.382485 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-logs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484219 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbkd\" (UniqueName: \"kubernetes.io/projected/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-kube-api-access-ljbkd\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484374 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586286 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586465 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-logs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586513 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbkd\" (UniqueName: \"kubernetes.io/projected/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-kube-api-access-ljbkd\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586574 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586604 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586696 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.587078 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.587705 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-logs\") pod 
\"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.588286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.592411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.593316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.604743 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.606557 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.607633 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbkd\" (UniqueName: \"kubernetes.io/projected/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-kube-api-access-ljbkd\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.642440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.674372 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.797269 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890582 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890643 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890663 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890695 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890770 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890834 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890982 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.891826 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.892059 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs" (OuterVolumeSpecName: "logs") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.906638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts" (OuterVolumeSpecName: "scripts") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.910218 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.910403 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw" (OuterVolumeSpecName: "kube-api-access-m4vtw") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "kube-api-access-m4vtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.941268 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.954394 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data" (OuterVolumeSpecName: "config-data") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.986035 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992923 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992953 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992963 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992971 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") on node \"crc\" DevicePath \"\"" Feb 18 
19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992982 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.993011 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.993020 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.993028 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.017538 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.097094 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.212633 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67750e31-ed62-4908-9b56-3a46be936224" path="/var/lib/kubelet/pods/67750e31-ed62-4908-9b56-3a46be936224/volumes" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.250637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerDied","Data":"1fd189f5734df90d29419c8abecc4af71db32a09c9c7fb47958213aa32db2369"} Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.250699 4932 scope.go:117] "RemoveContainer" containerID="99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.250830 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.257435 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a" exitCode=0 Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.257549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a"} Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.274446 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.285290 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.292807 4932 scope.go:117] "RemoveContainer" containerID="ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.299134 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.313141 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: E0218 19:55:33.313605 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.313626 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" Feb 18 19:55:33 crc kubenswrapper[4932]: E0218 19:55:33.313659 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.313666 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.316746 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.316778 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.319394 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.323802 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.324850 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.326984 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403268 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403323 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403358 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403416 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrdf\" (UniqueName: \"kubernetes.io/projected/cb2c5249-4fcd-404d-8eac-551e66fb93d0-kube-api-access-krrdf\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403505 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403539 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403655 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505041 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505377 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505474 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrdf\" (UniqueName: \"kubernetes.io/projected/cb2c5249-4fcd-404d-8eac-551e66fb93d0-kube-api-access-krrdf\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505513 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505749 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.506078 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.506444 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.513906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.514400 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.516378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.517191 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.518547 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.522585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrdf\" 
(UniqueName: \"kubernetes.io/projected/cb2c5249-4fcd-404d-8eac-551e66fb93d0-kube-api-access-krrdf\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.543004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.562726 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.570980 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.673438 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.708827 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.708886 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.708979 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709044 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709063 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709111 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709126 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709526 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709538 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.713622 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv" (OuterVolumeSpecName: "kube-api-access-dlbvv") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "kube-api-access-dlbvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.719388 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts" (OuterVolumeSpecName: "scripts") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.750135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.811537 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.811567 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.811582 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.838455 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data" (OuterVolumeSpecName: "config-data") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.855338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.912900 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.912934 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.223111 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: W0218 19:55:34.229505 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2c5249_4fcd_404d_8eac_551e66fb93d0.slice/crio-786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a WatchSource:0}: Error finding container 786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a: Status 404 returned error can't find the container with id 786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.290476 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735","Type":"ContainerStarted","Data":"25fe70ebf103b6cadc41b1f91ef6f1dd1a5a7a4e24f1d3d3fe196fdcf098d722"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.290555 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735","Type":"ContainerStarted","Data":"142cd912c87187d95f5584853db611262026e0a8153b39013ff8a7a9378cbfed"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.292435 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb2c5249-4fcd-404d-8eac-551e66fb93d0","Type":"ContainerStarted","Data":"786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.296458 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.296492 4932 scope.go:117] "RemoveContainer" containerID="004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.296599 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.302896 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.343489 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.381997 4932 scope.go:117] "RemoveContainer" containerID="ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.431047 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.462033 4932 scope.go:117] "RemoveContainer" containerID="7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.475437 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.499914 4932 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500313 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500333 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500356 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500362 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500381 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500387 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500402 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500409 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500580 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500598 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500614 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500622 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.502628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.504669 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.505229 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.511832 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.521656 4932 scope.go:117] "RemoveContainer" containerID="352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634520 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634583 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634630 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634884 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634968 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736837 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736932 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736984 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.737026 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.737112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.737129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.738337 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.738354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.744432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.746919 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.747089 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.754243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.759058 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.827290 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.201934 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" path="/var/lib/kubelet/pods/2b53bc70-03d1-4b04-8b5e-bf135aed16bc/volumes" Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.203440 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" path="/var/lib/kubelet/pods/bdfd208a-d781-4471-aa15-5fcbb592ec07/volumes" Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.288121 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.382704 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735","Type":"ContainerStarted","Data":"179316e850b98a1ff2ef03813dbe9079b39fb09e9b14ba7f8ff6facb6fd83f93"} Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.398706 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb2c5249-4fcd-404d-8eac-551e66fb93d0","Type":"ContainerStarted","Data":"8cc6ec2054f165251b0303c9d801ed55f1494727efd6a488b1607afbef5447eb"} Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.424501 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.424478989 podStartE2EDuration="3.424478989s" podCreationTimestamp="2026-02-18 19:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:35.409585739 +0000 UTC m=+1298.991540594" watchObservedRunningTime="2026-02-18 19:55:35.424478989 +0000 UTC m=+1299.006433824" Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.414280 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.414814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.414827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"a620663d217c47ddd4628558591f9269acb00cf7b394dcbb5dec8251391d19e8"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.416459 4932 generic.go:334] "Generic (PLEG): container finished" podID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerID="4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b" exitCode=0 Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.416546 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerDied","Data":"4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.419135 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb2c5249-4fcd-404d-8eac-551e66fb93d0","Type":"ContainerStarted","Data":"28346a7ce5cb4a646795e2f1b49dd135ef4701e8b2af33723542571933b8aee2"} Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.233907 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.233882866 podStartE2EDuration="4.233882866s" podCreationTimestamp="2026-02-18 19:55:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:36.462577359 +0000 UTC m=+1300.044532204" watchObservedRunningTime="2026-02-18 19:55:37.233882866 +0000 UTC m=+1300.815837721" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.454215 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59"} Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.799903 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.918682 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.918800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.918867 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.919026 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m25dj\" (UniqueName: 
\"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.926833 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj" (OuterVolumeSpecName: "kube-api-access-m25dj") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "kube-api-access-m25dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.942616 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts" (OuterVolumeSpecName: "scripts") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.963445 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.965541 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data" (OuterVolumeSpecName: "config-data") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.020836 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.021114 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.021125 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.021133 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.503585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerDied","Data":"956c5d87f252b7fd42789858690e26fc0ca0b5a7be8c4cc152d63b1bddc300e7"} Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.503629 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956c5d87f252b7fd42789858690e26fc0ca0b5a7be8c4cc152d63b1bddc300e7" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.503688 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.591872 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:55:38 crc kubenswrapper[4932]: E0218 19:55:38.592701 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerName="nova-cell0-conductor-db-sync" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.592727 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerName="nova-cell0-conductor-db-sync" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.592996 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerName="nova-cell0-conductor-db-sync" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.593865 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.597151 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rd8q2" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.597466 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.607305 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.736439 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 
19:55:38.736521 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckq2\" (UniqueName: \"kubernetes.io/projected/7a4ef33d-657c-4785-9c64-7bb797728924-kube-api-access-jckq2\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.736942 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.838263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.838352 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.838378 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jckq2\" (UniqueName: \"kubernetes.io/projected/7a4ef33d-657c-4785-9c64-7bb797728924-kube-api-access-jckq2\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.846227 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.846957 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.856566 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jckq2\" (UniqueName: \"kubernetes.io/projected/7a4ef33d-657c-4785-9c64-7bb797728924-kube-api-access-jckq2\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.925968 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.436862 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:55:39 crc kubenswrapper[4932]: W0218 19:55:39.439870 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a4ef33d_657c_4785_9c64_7bb797728924.slice/crio-12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59 WatchSource:0}: Error finding container 12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59: Status 404 returned error can't find the container with id 12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59 Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.523364 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d"} Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.523634 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.532551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a4ef33d-657c-4785-9c64-7bb797728924","Type":"ContainerStarted","Data":"12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59"} Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.550032 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.513347359 podStartE2EDuration="5.550013218s" podCreationTimestamp="2026-02-18 19:55:34 +0000 UTC" firstStartedPulling="2026-02-18 19:55:35.372340596 +0000 UTC m=+1298.954295441" lastFinishedPulling="2026-02-18 19:55:38.409006445 +0000 UTC m=+1301.990961300" 
observedRunningTime="2026-02-18 19:55:39.544745787 +0000 UTC m=+1303.126700652" watchObservedRunningTime="2026-02-18 19:55:39.550013218 +0000 UTC m=+1303.131968063" Feb 18 19:55:40 crc kubenswrapper[4932]: I0218 19:55:40.546655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a4ef33d-657c-4785-9c64-7bb797728924","Type":"ContainerStarted","Data":"9dc9ef7edd603b01dba71c362976e69bd74fe6c5c533cbd5ca93d7f2cf9c6180"} Feb 18 19:55:40 crc kubenswrapper[4932]: I0218 19:55:40.547775 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:40 crc kubenswrapper[4932]: I0218 19:55:40.591232 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.591206706 podStartE2EDuration="2.591206706s" podCreationTimestamp="2026-02-18 19:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:40.581735862 +0000 UTC m=+1304.163690757" watchObservedRunningTime="2026-02-18 19:55:40.591206706 +0000 UTC m=+1304.173161581" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.674836 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.676371 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.703849 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.723078 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 
19:55:43.579950 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.580030 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.674117 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.674243 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.717761 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.756282 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:44 crc kubenswrapper[4932]: I0218 19:55:44.588937 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:44 crc kubenswrapper[4932]: I0218 19:55:44.588978 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:45 crc kubenswrapper[4932]: I0218 19:55:45.313973 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:55:45 crc kubenswrapper[4932]: I0218 19:55:45.344016 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:55:46 crc kubenswrapper[4932]: I0218 19:55:46.375774 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:46 crc kubenswrapper[4932]: I0218 
19:55:46.380255 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:48 crc kubenswrapper[4932]: I0218 19:55:48.976837 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.488053 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.489324 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.491923 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.492031 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.499280 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.556426 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.556500 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc 
kubenswrapper[4932]: I0218 19:55:49.556606 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.556886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.637343 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.638733 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.642611 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658382 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.668843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.684899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.686290 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.704097 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.715219 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.760296 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.760348 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.760367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.768367 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.769538 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.772507 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.801491 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.803183 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.813420 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.819432 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.864964 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865038 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865065 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865105 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865128 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc 
kubenswrapper[4932]: I0218 19:55:49.865193 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865224 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865269 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.870575 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.874737 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.876325 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.944192 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.953264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.965470 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966681 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966714 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966856 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966911 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.970550 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.972197 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.973441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.978733 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.978948 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.980122 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.981008 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.983855 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.009801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.020934 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.063336 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.068981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.069052 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.069090 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.069275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.086604 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.089943 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.109254 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.125640 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.167965 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170302 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170326 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170352 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170370 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170455 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170570 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170687 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.174013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.174886 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.187394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273183 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273310 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273334 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273357 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273453 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.274498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.277117 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.277607 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.280340 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.282982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.291657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.361634 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.424526 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.521148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"]
Feb 18 19:55:50 crc kubenswrapper[4932]: W0218 19:55:50.531161 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6473c7ac_af7d_4556_aa86_28aabc85694a.slice/crio-0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7 WatchSource:0}: Error finding container 0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7: Status 404 returned error can't find the container with id 0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.634939 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.660231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.661969 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.667555 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.667739 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.685679 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.685993 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.686026 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.686044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.688247 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.692565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerStarted","Data":"0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7"}
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.760148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.787332 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789010 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789065 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789086 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789187 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.795517 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.799801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.800146 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.810609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.903312 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.019717 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:55:51 crc kubenswrapper[4932]: W0218 19:55:51.055412 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda445a66f_1685_4542_89c3_012fef147a76.slice/crio-db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31 WatchSource:0}: Error finding container db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31: Status 404 returned error can't find the container with id db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.233685 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"]
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.474208 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"]
Feb 18 19:55:51 crc kubenswrapper[4932]: W0218 19:55:51.503607 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d3a07cf_a084_46a0_8ca2_830e0838d575.slice/crio-1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0 WatchSource:0}: Error finding container 1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0: Status 404 returned error can't find the container with id 1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.768264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerStarted","Data":"db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.786794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerStarted","Data":"5e68c76538cd952d6f6a3dd14aebb40e0d4b05858a3c9289e0a5ad892f731528"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.789819 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerStarted","Data":"1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.792509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerStarted","Data":"372bb3654ce51919b696e3d9eb989784a6ab397c40b87b62e7e2d42b5443d7b8"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.793995 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerStarted","Data":"a03c13b667e70bffdfe5ae8206b4073cc9d064e02a2aa3bfd907faed67753e61"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.795904 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerStarted","Data":"7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.805686 4932 generic.go:334] "Generic (PLEG): container finished" podID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerID="efa1de95f92b6f71ab718eba81f5146f37d50f46643463b88203e329ebaceb9a" exitCode=0
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.805744 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerDied","Data":"efa1de95f92b6f71ab718eba81f5146f37d50f46643463b88203e329ebaceb9a"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.805775 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerStarted","Data":"3a5bcecade0b5dff94560cc8f3a4637b00cd9cdde3e3372019fd257bdc54822e"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.823078 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-f756w" podStartSLOduration=1.823057751 podStartE2EDuration="1.823057751s" podCreationTimestamp="2026-02-18 19:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:51.812878649 +0000 UTC m=+1315.394833494" watchObservedRunningTime="2026-02-18 19:55:51.823057751 +0000 UTC m=+1315.405012596"
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.840749 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xlzdb" podStartSLOduration=2.8407345189999997 podStartE2EDuration="2.840734519s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:51.836472484 +0000 UTC m=+1315.418427329" watchObservedRunningTime="2026-02-18 19:55:51.840734519 +0000 UTC m=+1315.422689364"
Feb 18 19:55:52 crc kubenswrapper[4932]: I0218 19:55:52.825777 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerStarted","Data":"9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f"}
Feb 18 19:55:52 crc kubenswrapper[4932]: I0218 19:55:52.833882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerStarted","Data":"3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93"}
Feb 18 19:55:52 crc kubenswrapper[4932]: I0218 19:55:52.863442 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" podStartSLOduration=3.863377078 podStartE2EDuration="3.863377078s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:52.855354299 +0000 UTC m=+1316.437309144" watchObservedRunningTime="2026-02-18 19:55:52.863377078 +0000 UTC m=+1316.445331923"
Feb 18 19:55:53 crc kubenswrapper[4932]: I0218 19:55:53.827367 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:55:53 crc kubenswrapper[4932]: I0218 19:55:53.843181 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:53 crc kubenswrapper[4932]: I0218 19:55:53.862423 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.876650 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerStarted","Data":"f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.877546 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6" gracePeriod=30
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.884780 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerStarted","Data":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.884823 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerStarted","Data":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.885221 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" containerID="cri-o://3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" gracePeriod=30
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.885426 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" containerID="cri-o://a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" gracePeriod=30
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.892801 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerStarted","Data":"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.892859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerStarted","Data":"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.897782 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerStarted","Data":"dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.961600 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.81816397 podStartE2EDuration="6.961554333s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:50.689861531 +0000 UTC m=+1314.271816376" lastFinishedPulling="2026-02-18 19:55:54.833251894 +0000 UTC m=+1318.415206739" observedRunningTime="2026-02-18 19:55:55.913728577 +0000 UTC m=+1319.495683412" watchObservedRunningTime="2026-02-18 19:55:55.961554333 +0000 UTC m=+1319.543509188"
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.969849 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.204336676 podStartE2EDuration="6.969833638s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:51.067442384 +0000 UTC m=+1314.649397229" lastFinishedPulling="2026-02-18 19:55:54.832939346 +0000 UTC m=+1318.414894191" observedRunningTime="2026-02-18 19:55:55.963292256 +0000 UTC m=+1319.545247111" watchObservedRunningTime="2026-02-18 19:55:55.969833638 +0000 UTC m=+1319.551788483"
Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.008148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.952273656 podStartE2EDuration="7.008130218s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:50.780742615 +0000 UTC m=+1314.362697460" lastFinishedPulling="2026-02-18 19:55:54.836599177 +0000 UTC m=+1318.418554022" observedRunningTime="2026-02-18 19:55:55.983872726 +0000 UTC m=+1319.565827571" watchObservedRunningTime="2026-02-18 19:55:56.008130218 +0000 UTC m=+1319.590085063"
Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.011993 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.956400857 podStartE2EDuration="7.011976193s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:50.780488188 +0000 UTC m=+1314.362443033" lastFinishedPulling="2026-02-18 19:55:54.836063524 +0000 UTC m=+1318.418018369" observedRunningTime="2026-02-18 19:55:56.006105267 +0000 UTC m=+1319.588060112" watchObservedRunningTime="2026-02-18 19:55:56.011976193 +0000 UTC m=+1319.593931038"
Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.504785 4932 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556680 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556900 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.557834 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs" (OuterVolumeSpecName: "logs") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.564069 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz" (OuterVolumeSpecName: "kube-api-access-vzznz") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "kube-api-access-vzznz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.594491 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.596339 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data" (OuterVolumeSpecName: "config-data") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660032 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660103 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660123 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660141 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914167 4932 generic.go:334] "Generic (PLEG): container finished" podID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" exitCode=0 Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914214 4932 generic.go:334] "Generic (PLEG): container finished" podID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" exitCode=143 Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914231 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerDied","Data":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"} Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914284 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerDied","Data":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"} Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerDied","Data":"a03c13b667e70bffdfe5ae8206b4073cc9d064e02a2aa3bfd907faed67753e61"} Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914320 4932 scope.go:117] "RemoveContainer" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.915410 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.962985 4932 scope.go:117] "RemoveContainer" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.963613 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.999308 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.009117 4932 scope.go:117] "RemoveContainer" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.010054 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": container with ID starting with a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69 not found: ID does not exist" 
containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.010169 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"} err="failed to get container status \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": rpc error: code = NotFound desc = could not find container \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": container with ID starting with a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.010293 4932 scope.go:117] "RemoveContainer" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.012524 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.012782 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": container with ID starting with 3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971 not found: ID does not exist" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.012982 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"} err="failed to get container status \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": rpc error: code = NotFound desc = could not find container \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": container with ID starting with 
3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013116 4932 scope.go:117] "RemoveContainer" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.013265 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013319 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.013370 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013382 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013638 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"} err="failed to get container status \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": rpc error: code = NotFound desc = could not find container \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": container with ID starting with a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013664 4932 scope.go:117] "RemoveContainer" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013686 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013744 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013998 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"} err="failed to get container status \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": rpc error: code = NotFound desc = could not find container \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": container with ID starting with 3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.015369 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.025102 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.026162 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.081983 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082269 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082391 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082781 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.084759 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.186074 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.191116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " 
pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.197691 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.197938 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.201642 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" path="/var/lib/kubelet/pods/34a9bdea-8dd1-4825-971a-36c348e2a918/volumes" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.211155 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.211932 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.244617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.356674 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.606423 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.606742 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.844076 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.959210 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerStarted","Data":"ac08c2a96822adae87e627ab8c6ab7ba89e03640acee4a44553d726b259966e3"} Feb 18 19:55:58 crc kubenswrapper[4932]: I0218 19:55:58.975278 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerStarted","Data":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} Feb 18 19:55:58 crc kubenswrapper[4932]: I0218 19:55:58.975696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerStarted","Data":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} Feb 18 19:55:59 crc kubenswrapper[4932]: I0218 19:55:59.000737 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=3.000714474 podStartE2EDuration="3.000714474s" podCreationTimestamp="2026-02-18 19:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:59.000708214 +0000 UTC m=+1322.582663069" watchObservedRunningTime="2026-02-18 19:55:59.000714474 +0000 UTC m=+1322.582669329" Feb 18 19:55:59 crc kubenswrapper[4932]: I0218 19:55:59.966025 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.126881 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.126991 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.165400 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.362550 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.362641 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.427482 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.507610 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.507852 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" 
podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" containerID="cri-o://2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c" gracePeriod=10 Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.003725 4932 generic.go:334] "Generic (PLEG): container finished" podID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerID="2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c" exitCode=0 Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.003832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerDied","Data":"2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c"} Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.004367 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerDied","Data":"abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9"} Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.004463 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.006263 4932 generic.go:334] "Generic (PLEG): container finished" podID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerID="7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd" exitCode=0 Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.006310 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerDied","Data":"7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd"} Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.039461 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.046793 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077515 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077562 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077600 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: 
\"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077826 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.098409 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk" (OuterVolumeSpecName: "kube-api-access-b9qmk") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "kube-api-access-b9qmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.183842 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.278900 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config" (OuterVolumeSpecName: "config") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.285618 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.286675 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.297848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.310898 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.317137 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387855 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387914 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387925 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387937 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.446351 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.446361 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.020547 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.065800 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.090726 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.357405 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.357684 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.506723 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.618822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.618864 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.618994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 
19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.619037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.627602 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz" (OuterVolumeSpecName: "kube-api-access-76srz") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). InnerVolumeSpecName "kube-api-access-76srz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.628784 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts" (OuterVolumeSpecName: "scripts") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.647847 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.659324 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data" (OuterVolumeSpecName: "config-data") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721319 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721353 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721366 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721379 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.032068 4932 generic.go:334] "Generic (PLEG): container finished" podID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerID="9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f" exitCode=0 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.032140 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerDied","Data":"9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f"} Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.033583 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" 
event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerDied","Data":"0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7"} Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.033631 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.033669 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.193341 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" path="/var/lib/kubelet/pods/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616/volumes" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.212938 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.213164 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" containerID="cri-o://4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.213636 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" containerID="cri-o://5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.252603 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.253300 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" containerID="cri-o://dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.329838 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.330098 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" containerID="cri-o://bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.330658 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" containerID="cri-o://321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.933704 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.049220 4932 generic.go:334] "Generic (PLEG): container finished" podID="a445a66f-1685-4542-89c3-012fef147a76" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" exitCode=143 Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.049289 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerDied","Data":"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050229 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050279 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050383 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050412 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: 
\"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050454 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.051844 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs" (OuterVolumeSpecName: "logs") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.053105 4932 generic.go:334] "Generic (PLEG): container finished" podID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" exitCode=0 Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.053134 4932 generic.go:334] "Generic (PLEG): container finished" podID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" exitCode=143 Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.053619 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054332 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerDied","Data":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054382 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerDied","Data":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054397 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerDied","Data":"ac08c2a96822adae87e627ab8c6ab7ba89e03640acee4a44553d726b259966e3"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054416 4932 scope.go:117] "RemoveContainer" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.056511 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg" (OuterVolumeSpecName: "kube-api-access-bvdwg") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "kube-api-access-bvdwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.086318 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.099514 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data" (OuterVolumeSpecName: "config-data") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.111732 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152573 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152610 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152627 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152638 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152648 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.195201 4932 scope.go:117] "RemoveContainer" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.224400 4932 scope.go:117] "RemoveContainer" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.234724 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": container with ID starting with 321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7 not found: ID does not exist" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.234781 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} err="failed to get container status \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": rpc error: code = NotFound desc = could not find container \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": container with ID starting with 321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.234806 4932 scope.go:117] "RemoveContainer" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.237244 4932 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": container with ID starting with bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505 not found: ID does not exist" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.237298 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} err="failed to get container status \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": rpc error: code = NotFound desc = could not find container \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": container with ID starting with bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.237336 4932 scope.go:117] "RemoveContainer" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.238047 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} err="failed to get container status \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": rpc error: code = NotFound desc = could not find container \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": container with ID starting with 321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.238089 4932 scope.go:117] "RemoveContainer" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.238414 4932 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} err="failed to get container status \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": rpc error: code = NotFound desc = could not find container \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": container with ID starting with bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.440514 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.452605 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.462038 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490243 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490805 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490832 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490859 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="init" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490867 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="init" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490885 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerName="nova-cell1-conductor-db-sync" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490892 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerName="nova-cell1-conductor-db-sync" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490908 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerName="nova-manage" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490916 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerName="nova-manage" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490923 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490930 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490952 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490959 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491214 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491228 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 
19:56:04.491240 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerName="nova-cell1-conductor-db-sync" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491251 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491266 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerName="nova-manage" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.494578 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.498991 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.509944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.509963 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.574767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575208 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575321 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575364 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575733 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575754 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575848 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.584483 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x" (OuterVolumeSpecName: "kube-api-access-bzw4x") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "kube-api-access-bzw4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.590288 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts" (OuterVolumeSpecName: "scripts") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.642613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data" (OuterVolumeSpecName: "config-data") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.643085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677325 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677349 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 
19:56:04.677426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677517 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677530 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677540 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677549 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.678204 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.684002 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.684797 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.685585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.696839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.826979 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.836571 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.076092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerDied","Data":"1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0"} Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.076389 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.076248 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w" Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.145120 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.147783 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.152972 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.153062 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.166231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.168311 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.173656 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.191098 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" path="/var/lib/kubelet/pods/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad/volumes" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.191865 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.288580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.288811 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.288916 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg4n7\" (UniqueName: \"kubernetes.io/projected/6be0a105-6011-49ed-9dd4-878f392f4b65-kube-api-access-vg4n7\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.290160 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.390424 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.390852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.390923 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg4n7\" (UniqueName: \"kubernetes.io/projected/6be0a105-6011-49ed-9dd4-878f392f4b65-kube-api-access-vg4n7\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.398539 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.398747 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.409334 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg4n7\" (UniqueName: \"kubernetes.io/projected/6be0a105-6011-49ed-9dd4-878f392f4b65-kube-api-access-vg4n7\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.492014 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.951347 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: W0218 19:56:05.963799 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be0a105_6011_49ed_9dd4_878f392f4b65.slice/crio-2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332 WatchSource:0}: Error finding container 2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332: Status 404 returned error can't find the container with id 2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332 Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.085278 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6be0a105-6011-49ed-9dd4-878f392f4b65","Type":"ContainerStarted","Data":"2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332"} Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.087155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerStarted","Data":"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"} Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.087209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerStarted","Data":"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"} Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.087219 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerStarted","Data":"6cc5ccb6f5aea2a8c8a0f89d07f1d4998a02b0b51bdac9b6c2ee10d527679c5a"} Feb 18 19:56:06 crc 
kubenswrapper[4932]: I0218 19:56:06.113033 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.113014867 podStartE2EDuration="2.113014867s" podCreationTimestamp="2026-02-18 19:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:06.105577742 +0000 UTC m=+1329.687532607" watchObservedRunningTime="2026-02-18 19:56:06.113014867 +0000 UTC m=+1329.694969732" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.638779 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717248 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717472 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717519 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.720647 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs" (OuterVolumeSpecName: "logs") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.722544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8" (OuterVolumeSpecName: "kube-api-access-xsfs8") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "kube-api-access-xsfs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.746680 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.748236 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data" (OuterVolumeSpecName: "config-data") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820100 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820128 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820137 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820147 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.097534 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6be0a105-6011-49ed-9dd4-878f392f4b65","Type":"ContainerStarted","Data":"072752f253b8d8eed502eeeb97f13bf49a8cd71dffb3b05c441f747b46f0a40a"} Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.097610 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099410 4932 generic.go:334] "Generic (PLEG): container finished" podID="a445a66f-1685-4542-89c3-012fef147a76" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" exitCode=0 Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099477 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099511 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerDied","Data":"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9"} Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099536 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerDied","Data":"db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31"} Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099555 4932 scope.go:117] "RemoveContainer" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.120914 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.120897739 podStartE2EDuration="2.120897739s" podCreationTimestamp="2026-02-18 19:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:07.113800343 +0000 UTC m=+1330.695755188" watchObservedRunningTime="2026-02-18 19:56:07.120897739 +0000 UTC m=+1330.702852584" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.127117 4932 scope.go:117] "RemoveContainer" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.235370 4932 scope.go:117] "RemoveContainer" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" Feb 18 19:56:07 crc kubenswrapper[4932]: E0218 19:56:07.235925 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9\": 
container with ID starting with 5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9 not found: ID does not exist" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.235956 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9"} err="failed to get container status \"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9\": rpc error: code = NotFound desc = could not find container \"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9\": container with ID starting with 5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9 not found: ID does not exist" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.235976 4932 scope.go:117] "RemoveContainer" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" Feb 18 19:56:07 crc kubenswrapper[4932]: E0218 19:56:07.236450 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4\": container with ID starting with 4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4 not found: ID does not exist" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.236471 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4"} err="failed to get container status \"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4\": rpc error: code = NotFound desc = could not find container \"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4\": container with ID starting with 
4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4 not found: ID does not exist" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.119705 4932 generic.go:334] "Generic (PLEG): container finished" podID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" exitCode=0 Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.119771 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerDied","Data":"dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0"} Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.177659 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.254767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"3e97df52-5201-479d-aae1-ac0c36e3ea63\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.254858 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"3e97df52-5201-479d-aae1-ac0c36e3ea63\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.255006 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"3e97df52-5201-479d-aae1-ac0c36e3ea63\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.260365 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96" (OuterVolumeSpecName: "kube-api-access-7jf96") pod "3e97df52-5201-479d-aae1-ac0c36e3ea63" (UID: "3e97df52-5201-479d-aae1-ac0c36e3ea63"). InnerVolumeSpecName "kube-api-access-7jf96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.280550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e97df52-5201-479d-aae1-ac0c36e3ea63" (UID: "3e97df52-5201-479d-aae1-ac0c36e3ea63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.290115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data" (OuterVolumeSpecName: "config-data") pod "3e97df52-5201-479d-aae1-ac0c36e3ea63" (UID: "3e97df52-5201-479d-aae1-ac0c36e3ea63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.357310 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.357351 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.357365 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.133204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerDied","Data":"5e68c76538cd952d6f6a3dd14aebb40e0d4b05858a3c9289e0a5ad892f731528"} Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.133651 4932 scope.go:117] "RemoveContainer" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.133491 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.205983 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.224055 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.233491 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:09 crc kubenswrapper[4932]: E0218 19:56:09.233963 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.233980 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api"
Feb 18 19:56:09 crc kubenswrapper[4932]: E0218 19:56:09.234008 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234014 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log"
Feb 18 19:56:09 crc kubenswrapper[4932]: E0218 19:56:09.234027 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234035 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234248 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234266 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234279 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234949 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.237371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.245482 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.276094 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.276577 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.276868 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.378364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.378458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.378505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.387781 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.389759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.407135 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.556948 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.677902 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.684126 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics" containerID="cri-o://705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" gracePeriod=30
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.827567 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.827610 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.907436 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:09 crc kubenswrapper[4932]: W0218 19:56:09.914452 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e788d9_865f_453f_bdca_1de3b96af3e7.slice/crio-548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce WatchSource:0}: Error finding container 548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce: Status 404 returned error can't find the container with id 548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.093500 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.151930 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerStarted","Data":"e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac"}
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.151979 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerStarted","Data":"548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce"}
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156526 4932 generic.go:334] "Generic (PLEG): container finished" podID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" exitCode=2
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156558 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156567 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerDied","Data":"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"}
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156593 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerDied","Data":"e47d3e77ce83e6731fdca0338e3764007d631b786a20a291b2d3ac30da1a2204"}
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156614 4932 scope.go:117] "RemoveContainer" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.195963 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"bf2c7a4b-b600-48af-8081-cbb3c729223f\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") "
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.202921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn" (OuterVolumeSpecName: "kube-api-access-hlzzn") pod "bf2c7a4b-b600-48af-8081-cbb3c729223f" (UID: "bf2c7a4b-b600-48af-8081-cbb3c729223f"). InnerVolumeSpecName "kube-api-access-hlzzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.208685 4932 scope.go:117] "RemoveContainer" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"
Feb 18 19:56:10 crc kubenswrapper[4932]: E0218 19:56:10.211374 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648\": container with ID starting with 705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648 not found: ID does not exist" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.211437 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"} err="failed to get container status \"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648\": rpc error: code = NotFound desc = could not find container \"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648\": container with ID starting with 705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648 not found: ID does not exist"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.304167 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.482988 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.482946807 podStartE2EDuration="1.482946807s" podCreationTimestamp="2026-02-18 19:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:10.174380956 +0000 UTC m=+1333.756335821" watchObservedRunningTime="2026-02-18 19:56:10.482946807 +0000 UTC m=+1334.064901642"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.491551 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.502371 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.513947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:56:10 crc kubenswrapper[4932]: E0218 19:56:10.514386 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.514403 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.514761 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.515418 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.521160 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.521299 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.534949 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.611953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcf9t\" (UniqueName: \"kubernetes.io/projected/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-api-access-rcf9t\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.612029 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.612070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.612298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714137 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714321 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcf9t\" (UniqueName: \"kubernetes.io/projected/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-api-access-rcf9t\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714433 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.719390 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.727661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.728264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.736364 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcf9t\" (UniqueName: \"kubernetes.io/projected/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-api-access-rcf9t\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0"
Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.833393 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.190921 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" path="/var/lib/kubelet/pods/3e97df52-5201-479d-aae1-ac0c36e3ea63/volumes"
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.191767 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" path="/var/lib/kubelet/pods/bf2c7a4b-b600-48af-8081-cbb3c729223f/volumes"
Feb 18 19:56:11 crc kubenswrapper[4932]: W0218 19:56:11.345198 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod469132ad_f7a9_4208_8f20_42f72f6c6436.slice/crio-2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b WatchSource:0}: Error finding container 2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b: Status 404 returned error can't find the container with id 2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.347805 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.669857 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670192 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent" containerID="cri-o://a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a" gracePeriod=30
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670375 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent" containerID="cri-o://4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770" gracePeriod=30
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670577 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core" containerID="cri-o://2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59" gracePeriod=30
Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670665 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd" containerID="cri-o://235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d" gracePeriod=30
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192380 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d" exitCode=0
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192849 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59" exitCode=2
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192862 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a" exitCode=0
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d"}
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59"}
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a"}
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.195529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"469132ad-f7a9-4208-8f20-42f72f6c6436","Type":"ContainerStarted","Data":"8b3baf41dbbeb5f78bd2df0e9a6349a73b0cf1bfed4ce521e634c36c19ea7208"}
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.195562 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"469132ad-f7a9-4208-8f20-42f72f6c6436","Type":"ContainerStarted","Data":"2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b"}
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.196768 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.222582 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.714197388 podStartE2EDuration="2.222557334s" podCreationTimestamp="2026-02-18 19:56:10 +0000 UTC" firstStartedPulling="2026-02-18 19:56:11.348794367 +0000 UTC m=+1334.930749212" lastFinishedPulling="2026-02-18 19:56:11.857154303 +0000 UTC m=+1335.439109158" observedRunningTime="2026-02-18 19:56:12.211049708 +0000 UTC m=+1335.793004573" watchObservedRunningTime="2026-02-18 19:56:12.222557334 +0000 UTC m=+1335.804512179"
Feb 18 19:56:14 crc kubenswrapper[4932]: I0218 19:56:14.557299 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 18 19:56:14 crc kubenswrapper[4932]: I0218 19:56:14.828165 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 18 19:56:14 crc kubenswrapper[4932]: I0218 19:56:14.828236 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 18 19:56:15 crc kubenswrapper[4932]: I0218 19:56:15.532352 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 18 19:56:15 crc kubenswrapper[4932]: I0218 19:56:15.840406 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:56:15 crc kubenswrapper[4932]: I0218 19:56:15.840432 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.271847 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770" exitCode=0
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.272505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770"}
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.500029 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.603818 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605011 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605159 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605307 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605402 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605506 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605660 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") "
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605944 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.606032 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.606554 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.606656 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.620328 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts" (OuterVolumeSpecName: "scripts") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.620436 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86" (OuterVolumeSpecName: "kube-api-access-k5x86") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "kube-api-access-k5x86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.645379 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.701147 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.706342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data" (OuterVolumeSpecName: "config-data") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709117 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709452 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709531 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709619 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709691 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.284914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"a620663d217c47ddd4628558591f9269acb00cf7b394dcbb5dec8251391d19e8"}
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.285319 4932 scope.go:117] "RemoveContainer" containerID="235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.285001 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.311396 4932 scope.go:117] "RemoveContainer" containerID="2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.321365 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.343920 4932 scope.go:117] "RemoveContainer" containerID="4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.349037 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.362341 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.362925 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.362947 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core"
Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.362964 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.362973 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent"
Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.362997 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363005 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd"
Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.363024 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363032 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363303 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363321 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363348 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363361 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.365649 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.368866 4932 scope.go:117] "RemoveContainer" containerID="a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.368876 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.369070 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.371741 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.374036 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423015 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423086 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0"
Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423115 4932
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423156 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423221 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423263 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423309 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423408 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525432 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525483 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc 
kubenswrapper[4932]: I0218 19:56:19.525594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525622 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.526055 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.526380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"ceilometer-0\" (UID: 
\"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530527 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530935 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.549561 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.557192 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.592563 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.693802 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.184633 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.297334 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"bed094a29d5cc735d8b58329a9d581210c267db550c3be7eeb9923193dc084eb"} Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.327227 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.845101 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:56:21 crc kubenswrapper[4932]: I0218 19:56:21.196096 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" path="/var/lib/kubelet/pods/f22c0acb-8789-4ba1-8e45-8e456165db99/volumes" Feb 18 19:56:21 crc kubenswrapper[4932]: I0218 19:56:21.311813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504"} Feb 18 19:56:21 crc kubenswrapper[4932]: I0218 19:56:21.311868 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea"} Feb 18 19:56:22 crc kubenswrapper[4932]: I0218 19:56:22.322505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c"} Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.356992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39"} Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.358681 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.386419 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.280531854 podStartE2EDuration="5.386398629s" podCreationTimestamp="2026-02-18 19:56:19 +0000 UTC" firstStartedPulling="2026-02-18 19:56:20.198926653 +0000 UTC m=+1343.780881508" lastFinishedPulling="2026-02-18 19:56:23.304793438 +0000 UTC m=+1346.886748283" observedRunningTime="2026-02-18 19:56:24.381266632 +0000 UTC m=+1347.963221487" watchObservedRunningTime="2026-02-18 19:56:24.386398629 +0000 UTC m=+1347.968353474" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.836657 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.837257 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.842891 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Feb 18 19:56:25 crc kubenswrapper[4932]: I0218 19:56:25.376046 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.380408 4932 generic.go:334] "Generic (PLEG): container finished" podID="59185a09-938b-47ba-99ed-1b81362038e0" containerID="f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6" exitCode=137 Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.380539 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerDied","Data":"f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6"} Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.383411 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerDied","Data":"372bb3654ce51919b696e3d9eb989784a6ab397c40b87b62e7e2d42b5443d7b8"} Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.383444 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372bb3654ce51919b696e3d9eb989784a6ab397c40b87b62e7e2d42b5443d7b8" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.461608 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.590245 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"59185a09-938b-47ba-99ed-1b81362038e0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.590453 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"59185a09-938b-47ba-99ed-1b81362038e0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.590687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"59185a09-938b-47ba-99ed-1b81362038e0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.596575 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946" (OuterVolumeSpecName: "kube-api-access-cb946") pod "59185a09-938b-47ba-99ed-1b81362038e0" (UID: "59185a09-938b-47ba-99ed-1b81362038e0"). InnerVolumeSpecName "kube-api-access-cb946". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.617918 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59185a09-938b-47ba-99ed-1b81362038e0" (UID: "59185a09-938b-47ba-99ed-1b81362038e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.621061 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data" (OuterVolumeSpecName: "config-data") pod "59185a09-938b-47ba-99ed-1b81362038e0" (UID: "59185a09-938b-47ba-99ed-1b81362038e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.694189 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.694227 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.694241 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.392335 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.423159 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.434920 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.445726 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: E0218 19:56:27.446527 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.446562 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.446891 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.448102 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.451306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.452978 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.455316 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.463395 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518139 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6v4z\" (UniqueName: \"kubernetes.io/projected/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-kube-api-access-z6v4z\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518401 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 
crc kubenswrapper[4932]: I0218 19:56:27.518690 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518737 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.606328 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.606401 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.606457 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.607676 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.607768 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856" gracePeriod=600 Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6v4z\" (UniqueName: \"kubernetes.io/projected/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-kube-api-access-z6v4z\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621200 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621287 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621434 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.627278 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.627299 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.629790 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.630713 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.646497 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6v4z\" (UniqueName: \"kubernetes.io/projected/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-kube-api-access-z6v4z\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.768629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.293918 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:28 crc kubenswrapper[4932]: W0218 19:56:28.301023 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0af656d_88c5_4f09_bb21_d7b1d6f85ec7.slice/crio-276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8 WatchSource:0}: Error finding container 276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8: Status 404 returned error can't find the container with id 276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8 Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.407902 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7","Type":"ContainerStarted","Data":"276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8"} Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.414049 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856" exitCode=0 Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 
19:56:28.414104 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856"} Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.414137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"} Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.414157 4932 scope.go:117] "RemoveContainer" containerID="435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349" Feb 18 19:56:29 crc kubenswrapper[4932]: I0218 19:56:29.214067 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59185a09-938b-47ba-99ed-1b81362038e0" path="/var/lib/kubelet/pods/59185a09-938b-47ba-99ed-1b81362038e0/volumes" Feb 18 19:56:29 crc kubenswrapper[4932]: I0218 19:56:29.429710 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7","Type":"ContainerStarted","Data":"675ee2536c11f30aaee26a576c5150d248f738a44e5ecfaa73a2f894e21b79b7"} Feb 18 19:56:29 crc kubenswrapper[4932]: I0218 19:56:29.457570 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.457554697 podStartE2EDuration="2.457554697s" podCreationTimestamp="2026-02-18 19:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:29.456124152 +0000 UTC m=+1353.038079007" watchObservedRunningTime="2026-02-18 19:56:29.457554697 +0000 UTC m=+1353.039509542" Feb 18 19:56:32 crc kubenswrapper[4932]: I0218 19:56:32.769210 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.161501 4932 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda445a66f-1685-4542-89c3-012fef147a76"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda445a66f-1685-4542-89c3-012fef147a76] : Timed out while waiting for systemd to remove kubepods-besteffort-poda445a66f_1685_4542_89c3_012fef147a76.slice" Feb 18 19:56:37 crc kubenswrapper[4932]: E0218 19:56:37.162014 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda445a66f-1685-4542-89c3-012fef147a76] : unable to destroy cgroup paths for cgroup [kubepods besteffort poda445a66f-1685-4542-89c3-012fef147a76] : Timed out while waiting for systemd to remove kubepods-besteffort-poda445a66f_1685_4542_89c3_012fef147a76.slice" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.509855 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.542091 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.554655 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.568245 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.570485 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.579895 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.609789 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626126 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626468 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728604 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728825 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.729565 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.737340 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.749056 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.755770 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.769586 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.799919 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.919263 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.383066 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:38 crc kubenswrapper[4932]: W0218 19:56:38.383698 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ecfead8_d016_48b2_bf3f_f3583a73b86c.slice/crio-19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f WatchSource:0}: Error finding container 19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f: Status 404 returned error can't find the container with id 19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.532794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerStarted","Data":"19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f"} Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.569299 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.799718 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.801232 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.804872 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.805109 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.815193 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858492 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858911 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.960808 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.960940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.960993 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.961049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.965694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.966347 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.970903 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.981819 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.125652 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.192609 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a445a66f-1685-4542-89c3-012fef147a76" path="/var/lib/kubelet/pods/a445a66f-1685-4542-89c3-012fef147a76/volumes" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.540655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerStarted","Data":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.540925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerStarted","Data":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.597311 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.597212668 podStartE2EDuration="2.597212668s" podCreationTimestamp="2026-02-18 19:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:39.595599778 +0000 UTC m=+1363.177554633" watchObservedRunningTime="2026-02-18 19:56:39.597212668 +0000 UTC m=+1363.179167523" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.625003 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 19:56:40 crc kubenswrapper[4932]: I0218 19:56:40.549990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerStarted","Data":"e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10"} Feb 18 19:56:40 crc kubenswrapper[4932]: I0218 
19:56:40.550533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerStarted","Data":"145cdbac6aac3a6ef91498bcc0059c99e75808662cbf2f4f042a83ee54006140"} Feb 18 19:56:40 crc kubenswrapper[4932]: I0218 19:56:40.569820 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-kf5w6" podStartSLOduration=2.5697963 podStartE2EDuration="2.5697963s" podCreationTimestamp="2026-02-18 19:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:40.56341705 +0000 UTC m=+1364.145371905" watchObservedRunningTime="2026-02-18 19:56:40.5697963 +0000 UTC m=+1364.151751155" Feb 18 19:56:45 crc kubenswrapper[4932]: I0218 19:56:45.616756 4932 generic.go:334] "Generic (PLEG): container finished" podID="738744b3-86e1-432c-8380-0d428a2e8263" containerID="e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10" exitCode=0 Feb 18 19:56:45 crc kubenswrapper[4932]: I0218 19:56:45.616887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerDied","Data":"e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10"} Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.029281 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153330 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153601 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153698 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.160342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts" (OuterVolumeSpecName: "scripts") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.163497 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn" (OuterVolumeSpecName: "kube-api-access-cmcwn") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "kube-api-access-cmcwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.190442 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data" (OuterVolumeSpecName: "config-data") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.194766 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255729 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255759 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255768 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255777 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.637549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerDied","Data":"145cdbac6aac3a6ef91498bcc0059c99e75808662cbf2f4f042a83ee54006140"} Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.637591 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145cdbac6aac3a6ef91498bcc0059c99e75808662cbf2f4f042a83ee54006140" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.637608 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.828987 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.829273 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log" containerID="cri-o://7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.829374 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api" containerID="cri-o://2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.843958 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.844247 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler" containerID="cri-o://e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.863621 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.864187 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" containerID="cri-o://e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.864309 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" containerID="cri-o://9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" gracePeriod=30 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.493805 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.583977 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.584055 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.584218 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.584306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.585067 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs" (OuterVolumeSpecName: "logs") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.589092 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k" (OuterVolumeSpecName: "kube-api-access-bw78k") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "kube-api-access-bw78k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.613887 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data" (OuterVolumeSpecName: "config-data") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.620643 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.658502 4932 generic.go:334] "Generic (PLEG): container finished" podID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerID="e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac" exitCode=0 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.658561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerDied","Data":"e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac"} Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.664553 4932 generic.go:334] "Generic (PLEG): container finished" podID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" exitCode=143 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.664668 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerDied","Data":"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"} Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667618 4932 generic.go:334] "Generic (PLEG): container finished" podID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" exitCode=0 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667727 4932 generic.go:334] "Generic (PLEG): container finished" podID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" exitCode=143 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667798 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerDied","Data":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} Feb 18 19:56:48 crc 
kubenswrapper[4932]: I0218 19:56:48.667887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerDied","Data":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"}
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667965 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerDied","Data":"19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f"}
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.668045 4932 scope.go:117] "RemoveContainer" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.668285 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686869 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686904 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686919 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686930 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.705490 4932 scope.go:117] "RemoveContainer" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.747421 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.755673 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.774944 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.775610 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775636 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api"
Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.775684 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775693 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log"
Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.775705 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738744b3-86e1-432c-8380-0d428a2e8263" containerName="nova-manage"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775713 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="738744b3-86e1-432c-8380-0d428a2e8263" containerName="nova-manage"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775937 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="738744b3-86e1-432c-8380-0d428a2e8263" containerName="nova-manage"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775963 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775978 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.777306 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.784578 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.790848 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.800989 4932 scope.go:117] "RemoveContainer" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"
Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.801560 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": container with ID starting with 2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03 not found: ID does not exist" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.801617 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} err="failed to get container status \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": rpc error: code = NotFound desc = could not find container \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": container with ID starting with 2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03 not found: ID does not exist"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.801656 4932 scope.go:117] "RemoveContainer" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"
Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.802019 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": container with ID starting with 7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed not found: ID does not exist" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802056 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} err="failed to get container status \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": rpc error: code = NotFound desc = could not find container \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": container with ID starting with 7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed not found: ID does not exist"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802082 4932 scope.go:117] "RemoveContainer" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802402 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} err="failed to get container status \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": rpc error: code = NotFound desc = could not find container \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": container with ID starting with 2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03 not found: ID does not exist"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802446 4932 scope.go:117] "RemoveContainer" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802734 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} err="failed to get container status \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": rpc error: code = NotFound desc = could not find container \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": container with ID starting with 7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed not found: ID does not exist"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.841858 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891841 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891969 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.993702 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"73e788d9-865f-453f-bdca-1de3b96af3e7\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") "
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.993818 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"73e788d9-865f-453f-bdca-1de3b96af3e7\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") "
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.993924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"73e788d9-865f-453f-bdca-1de3b96af3e7\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") "
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994368 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994497 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994551 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994741 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.995272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:48.999774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.000025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r" (OuterVolumeSpecName: "kube-api-access-chq2r") pod "73e788d9-865f-453f-bdca-1de3b96af3e7" (UID: "73e788d9-865f-453f-bdca-1de3b96af3e7"). InnerVolumeSpecName "kube-api-access-chq2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.010764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.024261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.027519 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data" (OuterVolumeSpecName: "config-data") pod "73e788d9-865f-453f-bdca-1de3b96af3e7" (UID: "73e788d9-865f-453f-bdca-1de3b96af3e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.058583 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73e788d9-865f-453f-bdca-1de3b96af3e7" (UID: "73e788d9-865f-453f-bdca-1de3b96af3e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.095954 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.095984 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.095993 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.138516 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.193743 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" path="/var/lib/kubelet/pods/7ecfead8-d016-48b2-bf3f-f3583a73b86c/volumes"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.215859 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300220 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") "
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300541 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") "
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") "
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") "
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300868 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") "
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.301377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs" (OuterVolumeSpecName: "logs") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.302590 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.304158 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw" (OuterVolumeSpecName: "kube-api-access-frsfw") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "kube-api-access-frsfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.333422 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.347296 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data" (OuterVolumeSpecName: "config-data") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.371418 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404113 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404139 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404150 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404161 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") on node \"crc\" DevicePath \"\""
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.639151 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.681294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerDied","Data":"548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce"}
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.681339 4932 scope.go:117] "RemoveContainer" containerID="e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.681415 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686878 4932 generic.go:334] "Generic (PLEG): container finished" podID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" exitCode=0
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686935 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686956 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerDied","Data":"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"}
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerDied","Data":"6cc5ccb6f5aea2a8c8a0f89d07f1d4998a02b0b51bdac9b6c2ee10d527679c5a"}
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.693229 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerStarted","Data":"d44a0859e6bc1ca146456cd319c226c1c97e6918ba7cf2e5b3fea2ceb5f507ac"}
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.708126 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.716441 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.740024 4932 scope.go:117] "RemoveContainer" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742338 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.742695 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742707 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log"
Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.742750 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742757 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata"
Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.742777 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742783 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742953 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742965 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742973 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.743676 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.752427 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.752565 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.752633 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.760699 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.786940 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.819623 4932 scope.go:117] "RemoveContainer" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.829484 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.831454 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.833747 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.833805 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.842785 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.858820 4932 scope.go:117] "RemoveContainer" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"
Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.860310 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3\": container with ID starting with 9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3 not found: ID does not exist" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.860387 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"} err="failed to get container status \"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3\": rpc error: code = NotFound desc = could not find container \"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3\": container with ID starting with 9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3 not found: ID does not exist"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.860423 4932 scope.go:117] "RemoveContainer" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"
Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.861924 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9\": container with ID starting with e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9 not found: ID does not exist" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.861947 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"} err="failed to get container status \"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9\": rpc error: code = NotFound desc = could not find container \"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9\": container with ID starting with e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9 not found: ID does not exist"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.914736 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1c40f97-715a-4ff5-a0f3-1c31cb982552-logs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.914838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915007 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-config-data\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2wzp\" (UniqueName: \"kubernetes.io/projected/f5b81937-96f5-42e2-b937-ab11c79ff3d0-kube-api-access-g2wzp\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915295 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915437 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-config-data\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915500 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915588 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gvg\" (UniqueName: \"kubernetes.io/projected/e1c40f97-715a-4ff5-a0f3-1c31cb982552-kube-api-access-z2gvg\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.939212 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60cae27_f16b_4874_800f_f94fc2ce849f.slice/crio-6cc5ccb6f5aea2a8c8a0f89d07f1d4998a02b0b51bdac9b6c2ee10d527679c5a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60cae27_f16b_4874_800f_f94fc2ce849f.slice\": RecentStats: unable to find data in memory cache]"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-config-data\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2wzp\" (UniqueName: \"kubernetes.io/projected/f5b81937-96f5-42e2-b937-ab11c79ff3d0-kube-api-access-g2wzp\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017650 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-config-data\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017698 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017720 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gvg\" (UniqueName: \"kubernetes.io/projected/e1c40f97-715a-4ff5-a0f3-1c31cb982552-kube-api-access-z2gvg\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1c40f97-715a-4ff5-a0f3-1c31cb982552-logs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.018201 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1c40f97-715a-4ff5-a0f3-1c31cb982552-logs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.021724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-config-data\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.021890 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-config-data\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.023352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.025226 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.027493 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0"
Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.044194 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z2gvg\" (UniqueName: \"kubernetes.io/projected/e1c40f97-715a-4ff5-a0f3-1c31cb982552-kube-api-access-z2gvg\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.044359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2wzp\" (UniqueName: \"kubernetes.io/projected/f5b81937-96f5-42e2-b937-ab11c79ff3d0-kube-api-access-g2wzp\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.097102 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.151949 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.592515 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.686490 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:50 crc kubenswrapper[4932]: W0218 19:56:50.691437 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1c40f97_715a_4ff5_a0f3_1c31cb982552.slice/crio-7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80 WatchSource:0}: Error finding container 7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80: Status 404 returned error can't find the container with id 7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80 Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.706509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerStarted","Data":"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.706552 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerStarted","Data":"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.708821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1c40f97-715a-4ff5-a0f3-1c31cb982552","Type":"ContainerStarted","Data":"7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.710545 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f5b81937-96f5-42e2-b937-ab11c79ff3d0","Type":"ContainerStarted","Data":"c802a8f843349e7a91e1c54efa3ea6e2da76e22c0517d4146e5ed79e8aa95cf9"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.728021 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7280037249999998 podStartE2EDuration="2.728003725s" podCreationTimestamp="2026-02-18 19:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:50.72300724 +0000 UTC m=+1374.304962105" watchObservedRunningTime="2026-02-18 19:56:50.728003725 +0000 UTC m=+1374.309958570" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.193906 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" path="/var/lib/kubelet/pods/73e788d9-865f-453f-bdca-1de3b96af3e7/volumes" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.194631 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" path="/var/lib/kubelet/pods/f60cae27-f16b-4874-800f-f94fc2ce849f/volumes" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.723445 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f5b81937-96f5-42e2-b937-ab11c79ff3d0","Type":"ContainerStarted","Data":"09f93a301ca4f462a1ffbd48b85ef5252a8763eadc40b8eba013b0e6730682c9"} Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.729315 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1c40f97-715a-4ff5-a0f3-1c31cb982552","Type":"ContainerStarted","Data":"c6d7764577ef61c1aacdb6be7f0f7af75476eca669f0bc547da31d8a799aa0e0"} Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.729366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1c40f97-715a-4ff5-a0f3-1c31cb982552","Type":"ContainerStarted","Data":"a64493fc3a17893cd6129b3f06f37dd6a0a8a196133064d219d4f9b4be075060"} Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.744582 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.744560184 podStartE2EDuration="2.744560184s" podCreationTimestamp="2026-02-18 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:51.74035769 +0000 UTC m=+1375.322312535" watchObservedRunningTime="2026-02-18 19:56:51.744560184 +0000 UTC m=+1375.326515029" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.766069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.76604867 podStartE2EDuration="2.76604867s" podCreationTimestamp="2026-02-18 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:56:51.763556788 +0000 UTC m=+1375.345511633" watchObservedRunningTime="2026-02-18 19:56:51.76604867 +0000 UTC m=+1375.348003515" Feb 18 19:56:55 crc kubenswrapper[4932]: I0218 19:56:55.098305 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:56:55 crc kubenswrapper[4932]: I0218 19:56:55.152057 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:55 crc kubenswrapper[4932]: I0218 19:56:55.152107 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:59 crc kubenswrapper[4932]: I0218 19:56:59.139672 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:56:59 crc kubenswrapper[4932]: I0218 19:56:59.140305 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.101449 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.143589 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.152192 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.152385 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.222475 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.222524 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.896932 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:57:01 crc kubenswrapper[4932]: I0218 19:57:01.164331 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e1c40f97-715a-4ff5-a0f3-1c31cb982552" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:01 crc kubenswrapper[4932]: I0218 19:57:01.164363 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e1c40f97-715a-4ff5-a0f3-1c31cb982552" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.145533 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.146136 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.146442 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.146464 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:09 crc 
kubenswrapper[4932]: I0218 19:57:09.154481 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.155441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.392619 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.394526 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.420646 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543903 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543931 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543995 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: 
\"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645810 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645843 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645898 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645934 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.646818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647113 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647373 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.676341 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 
19:57:09.719110 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.192585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.238579 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.243525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.275243 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.946249 4932 generic.go:334] "Generic (PLEG): container finished" podID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" exitCode=0 Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.947718 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerDied","Data":"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e"} Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.947754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerStarted","Data":"655d5fb141738aad0155e62442b9035066c7a9ec2985b3b96a40dbf2d8892c36"} Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.966933 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:57:11 crc kubenswrapper[4932]: I0218 19:57:11.965808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerStarted","Data":"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632"} Feb 18 19:57:11 crc kubenswrapper[4932]: I0218 19:57:11.966140 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.026995 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" podStartSLOduration=3.026971217 podStartE2EDuration="3.026971217s" podCreationTimestamp="2026-02-18 19:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:12.006317672 +0000 UTC m=+1395.588272517" watchObservedRunningTime="2026-02-18 19:57:12.026971217 +0000 UTC m=+1395.608926052" Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.218751 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219045 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" containerID="cri-o://d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219116 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" containerID="cri-o://c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219186 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" containerID="cri-o://beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219151 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="sg-core" containerID="cri-o://89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.273261 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.273508 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" containerID="cri-o://cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.273850 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" containerID="cri-o://ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.974715 4932 generic.go:334] "Generic (PLEG): container finished" podID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" exitCode=143 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.974790 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerDied","Data":"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f"} Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.976902 4932 generic.go:334] 
"Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" exitCode=0 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.976973 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39"} Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.977010 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c"} Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.976980 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" exitCode=2 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.977028 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" exitCode=0 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.977143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea"} Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.461777 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561737 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561845 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561959 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.562868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs" (OuterVolumeSpecName: "logs") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.571295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj" (OuterVolumeSpecName: "kube-api-access-xqmmj") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "kube-api-access-xqmmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.604801 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data" (OuterVolumeSpecName: "config-data") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.608830 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664463 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664519 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664531 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664548 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.997847 4932 generic.go:334] "Generic (PLEG): container finished" podID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" exitCode=0 Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.997924 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerDied","Data":"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239"} Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.998263 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerDied","Data":"d44a0859e6bc1ca146456cd319c226c1c97e6918ba7cf2e5b3fea2ceb5f507ac"} Feb 18 19:57:14 crc kubenswrapper[4932]: 
I0218 19:57:14.997952 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.998330 4932 scope.go:117] "RemoveContainer" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.040439 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.047939 4932 scope.go:117] "RemoveContainer" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.053797 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.070721 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.071254 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071276 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.071299 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071308 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071587 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071622 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.072932 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.075317 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.075527 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.076028 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.080549 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.088696 4932 scope.go:117] "RemoveContainer" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.091278 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239\": container with ID starting with ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239 not found: ID does not exist" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.091326 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239"} err="failed to get container status \"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239\": rpc error: code = NotFound desc = could not find container 
\"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239\": container with ID starting with ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239 not found: ID does not exist" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.091353 4932 scope.go:117] "RemoveContainer" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.091669 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f\": container with ID starting with cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f not found: ID does not exist" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.091698 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f"} err="failed to get container status \"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f\": rpc error: code = NotFound desc = could not find container \"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f\": container with ID starting with cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f not found: ID does not exist" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174467 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6q2\" (UniqueName: \"kubernetes.io/projected/d6624ec2-3d16-4050-a368-9f196157bbf5-kube-api-access-hd6q2\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174778 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174878 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6624ec2-3d16-4050-a368-9f196157bbf5-logs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174950 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.175047 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-config-data\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.175089 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.190771 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" path="/var/lib/kubelet/pods/99f0bb69-5596-4997-b53f-9ceb9aa7cac1/volumes" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276605 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-config-data\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276744 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6q2\" (UniqueName: \"kubernetes.io/projected/d6624ec2-3d16-4050-a368-9f196157bbf5-kube-api-access-hd6q2\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276831 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6624ec2-3d16-4050-a368-9f196157bbf5-logs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.277946 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6624ec2-3d16-4050-a368-9f196157bbf5-logs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.281968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.282009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.282057 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.282641 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-config-data\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.305489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6q2\" (UniqueName: 
\"kubernetes.io/projected/d6624ec2-3d16-4050-a368-9f196157bbf5-kube-api-access-hd6q2\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.444305 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.902130 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: W0218 19:57:15.914325 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6624ec2_3d16_4050_a368_9f196157bbf5.slice/crio-4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0 WatchSource:0}: Error finding container 4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0: Status 404 returned error can't find the container with id 4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0 Feb 18 19:57:16 crc kubenswrapper[4932]: I0218 19:57:16.010925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6624ec2-3d16-4050-a368-9f196157bbf5","Type":"ContainerStarted","Data":"4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0"} Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.036637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6624ec2-3d16-4050-a368-9f196157bbf5","Type":"ContainerStarted","Data":"db638daddb0b23c5095b633c55576a9da0f22ae163669278c1f1885dd8cfeaa9"} Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.037012 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6624ec2-3d16-4050-a368-9f196157bbf5","Type":"ContainerStarted","Data":"32fd820869df0e1bb14f5e26cb049c1cb9e0cf510951376c7aa1af5f51cfb5b9"} Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.065385 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.065361004 podStartE2EDuration="2.065361004s" podCreationTimestamp="2026-02-18 19:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:17.0560063 +0000 UTC m=+1400.637961155" watchObservedRunningTime="2026-02-18 19:57:17.065361004 +0000 UTC m=+1400.647315859" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.486281 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528538 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528609 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528653 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528696 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: 
\"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528721 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528745 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528764 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.537780 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.538094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.546719 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts" (OuterVolumeSpecName: "scripts") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.554397 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl" (OuterVolumeSpecName: "kube-api-access-2bvgl") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "kube-api-access-2bvgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.620557 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.626247 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.630999 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631046 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631061 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631076 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631087 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631102 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.651160 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.668291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data" (OuterVolumeSpecName: "config-data") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.733387 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.733418 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.054574 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" exitCode=0 Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.054705 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.055439 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504"} Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.055500 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"bed094a29d5cc735d8b58329a9d581210c267db550c3be7eeb9923193dc084eb"} Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.055522 4932 scope.go:117] "RemoveContainer" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.093851 4932 scope.go:117] "RemoveContainer" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.113074 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.136330 4932 scope.go:117] "RemoveContainer" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.139270 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.150885 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151733 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="sg-core" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151759 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" 
containerName="sg-core" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151805 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151815 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151829 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151836 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151869 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151878 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152119 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152153 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152201 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="sg-core" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152215 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.156541 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.159157 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.159813 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.160068 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.167404 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.180287 4932 scope.go:117] "RemoveContainer" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205151 4932 scope.go:117] "RemoveContainer" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.205549 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39\": container with ID starting with c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39 not found: ID does not exist" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205579 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39"} err="failed to get container status 
\"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39\": rpc error: code = NotFound desc = could not find container \"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39\": container with ID starting with c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39 not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205604 4932 scope.go:117] "RemoveContainer" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.205944 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c\": container with ID starting with 89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c not found: ID does not exist" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205998 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c"} err="failed to get container status \"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c\": rpc error: code = NotFound desc = could not find container \"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c\": container with ID starting with 89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.206058 4932 scope.go:117] "RemoveContainer" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.207448 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504\": container with ID starting with beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504 not found: ID does not exist" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.207477 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504"} err="failed to get container status \"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504\": rpc error: code = NotFound desc = could not find container \"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504\": container with ID starting with beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504 not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.207495 4932 scope.go:117] "RemoveContainer" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.207794 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea\": container with ID starting with d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea not found: ID does not exist" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.207841 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea"} err="failed to get container status \"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea\": rpc error: code = NotFound desc = could not find container \"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea\": container with ID 
starting with d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.243782 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-log-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.243899 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-kube-api-access-vsc42\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.243956 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-scripts\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244024 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244055 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 
19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244115 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-run-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244155 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-config-data\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346144 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-scripts\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346228 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-run-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346403 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-config-data\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-log-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-kube-api-access-vsc42\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: 
I0218 19:57:18.348343 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-log-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.348506 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-run-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.352543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-config-data\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.352894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.353074 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.353884 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.354966 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-scripts\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.369369 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-kube-api-access-vsc42\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.482658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.936794 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.073659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"07de8e561d3e4d0d3f051288dd8b7aafc54f8e71086628186d3b560f6ecebdef"} Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.192001 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" path="/var/lib/kubelet/pods/07d7be76-f5d6-4280-8009-01c1db25ee6e/volumes" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.472949 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.475019 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.499335 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.571666 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.571923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.572077 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.673920 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674001 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674024 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674529 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.699101 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.720357 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.797382 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.797648 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" containerID="cri-o://3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93" gracePeriod=10 Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.810343 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.116554 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"f0812891d869584a4693f01a79121c669aa1fd2d2a4417194a4f35894b947583"} Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.116853 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"b2056538af318016b3a43ddb182a1dc99ac70f398cb30d801529059a8962c269"} Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.125485 4932 generic.go:334] "Generic (PLEG): container finished" podID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerID="3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93" exitCode=0 Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.125530 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerDied","Data":"3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93"} Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.403627 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.699485 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.901811 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902095 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902299 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902318 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902383 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod 
\"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902399 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.919529 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z" (OuterVolumeSpecName: "kube-api-access-cjq9z") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "kube-api-access-cjq9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.971685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.992979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.995624 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config" (OuterVolumeSpecName: "config") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.999869 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005750 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005820 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005832 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005846 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") on node \"crc\" DevicePath \"\"" Feb 
18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005855 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.016646 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.108073 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.135537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"9dc865bc159b7f3f1d0586bafba309d1af9f0c9a279fb487f92307d5be96c487"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.137450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerDied","Data":"3a5bcecade0b5dff94560cc8f3a4637b00cd9cdde3e3372019fd257bdc54822e"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.137471 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.137508 4932 scope.go:117] "RemoveContainer" containerID="3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.138968 4932 generic.go:334] "Generic (PLEG): container finished" podID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" exitCode=0 Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.139014 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.139038 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerStarted","Data":"bd6d1dac6bf3ebca465127b4e668733d6b3eab206b93e857a3ffc9cc951ff030"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.182445 4932 scope.go:117] "RemoveContainer" containerID="efa1de95f92b6f71ab718eba81f5146f37d50f46643463b88203e329ebaceb9a" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.256142 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"] Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.277516 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"] Feb 18 19:57:22 crc kubenswrapper[4932]: I0218 19:57:22.153899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerStarted","Data":"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8"} Feb 18 19:57:23 crc 
kubenswrapper[4932]: I0218 19:57:23.168966 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"261c3f31d70b14a90e3ad6ce964cc3aa48b7447063b6b0bd154dc91874bb7d0e"} Feb 18 19:57:23 crc kubenswrapper[4932]: I0218 19:57:23.169362 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:57:23 crc kubenswrapper[4932]: I0218 19:57:23.200268 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.053498656 podStartE2EDuration="5.200250005s" podCreationTimestamp="2026-02-18 19:57:18 +0000 UTC" firstStartedPulling="2026-02-18 19:57:18.943977658 +0000 UTC m=+1402.525932503" lastFinishedPulling="2026-02-18 19:57:22.090728997 +0000 UTC m=+1405.672683852" observedRunningTime="2026-02-18 19:57:23.190750988 +0000 UTC m=+1406.772705833" watchObservedRunningTime="2026-02-18 19:57:23.200250005 +0000 UTC m=+1406.782204850" Feb 18 19:57:23 crc kubenswrapper[4932]: I0218 19:57:23.202560 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" path="/var/lib/kubelet/pods/c89ff872-244d-428a-a29c-3b9adeae5c0c/volumes" Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.191915 4932 generic.go:334] "Generic (PLEG): container finished" podID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" exitCode=0 Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.193438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8"} Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.428250 4932 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-87f66f8bf-sszng" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: i/o timeout" Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.444499 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.444550 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.204015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerStarted","Data":"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a"} Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.230202 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvplf" podStartSLOduration=2.772706014 podStartE2EDuration="7.230156391s" podCreationTimestamp="2026-02-18 19:57:19 +0000 UTC" firstStartedPulling="2026-02-18 19:57:21.142988914 +0000 UTC m=+1404.724943759" lastFinishedPulling="2026-02-18 19:57:25.600439251 +0000 UTC m=+1409.182394136" observedRunningTime="2026-02-18 19:57:26.227500875 +0000 UTC m=+1409.809455720" watchObservedRunningTime="2026-02-18 19:57:26.230156391 +0000 UTC m=+1409.812111236" Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.464464 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d6624ec2-3d16-4050-a368-9f196157bbf5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.464479 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="d6624ec2-3d16-4050-a368-9f196157bbf5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.791560 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:28 crc kubenswrapper[4932]: E0218 19:57:28.792314 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="init" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.792327 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="init" Feb 18 19:57:28 crc kubenswrapper[4932]: E0218 19:57:28.792356 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.792362 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.792531 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.794260 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.808459 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.862281 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.862388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.862659 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.964866 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965155 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965316 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965337 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.984379 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.126799 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.782303 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.810563 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.810607 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.249459 4932 generic.go:334] "Generic (PLEG): container finished" podID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" exitCode=0 Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.249522 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285"} Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.249789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerStarted","Data":"78e974bf0eef1be4d5c43dcef00fe5ff25f566422b51bca7b7994c8bd4a0c17b"} Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.861257 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:30 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:30 crc kubenswrapper[4932]: > Feb 18 19:57:32 crc kubenswrapper[4932]: I0218 
19:57:32.274868 4932 generic.go:334] "Generic (PLEG): container finished" podID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" exitCode=0 Feb 18 19:57:32 crc kubenswrapper[4932]: I0218 19:57:32.274929 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d"} Feb 18 19:57:33 crc kubenswrapper[4932]: I0218 19:57:33.290043 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerStarted","Data":"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708"} Feb 18 19:57:33 crc kubenswrapper[4932]: I0218 19:57:33.313384 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkrfs" podStartSLOduration=2.628125552 podStartE2EDuration="5.313362018s" podCreationTimestamp="2026-02-18 19:57:28 +0000 UTC" firstStartedPulling="2026-02-18 19:57:30.251407874 +0000 UTC m=+1413.833362719" lastFinishedPulling="2026-02-18 19:57:32.93664434 +0000 UTC m=+1416.518599185" observedRunningTime="2026-02-18 19:57:33.309707856 +0000 UTC m=+1416.891662711" watchObservedRunningTime="2026-02-18 19:57:33.313362018 +0000 UTC m=+1416.895316863" Feb 18 19:57:35 crc kubenswrapper[4932]: I0218 19:57:35.454751 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:35 crc kubenswrapper[4932]: I0218 19:57:35.455430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:35 crc kubenswrapper[4932]: I0218 19:57:35.459250 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:35 crc 
kubenswrapper[4932]: I0218 19:57:35.463983 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:36 crc kubenswrapper[4932]: I0218 19:57:36.331950 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:36 crc kubenswrapper[4932]: I0218 19:57:36.343396 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:39 crc kubenswrapper[4932]: I0218 19:57:39.127328 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:39 crc kubenswrapper[4932]: I0218 19:57:39.127676 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:40 crc kubenswrapper[4932]: I0218 19:57:40.180652 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wkrfs" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:40 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:40 crc kubenswrapper[4932]: > Feb 18 19:57:40 crc kubenswrapper[4932]: I0218 19:57:40.853710 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:40 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:40 crc kubenswrapper[4932]: > Feb 18 19:57:48 crc kubenswrapper[4932]: I0218 19:57:48.500441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:57:49 crc kubenswrapper[4932]: I0218 19:57:49.174592 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:49 crc kubenswrapper[4932]: I0218 19:57:49.248254 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:49 crc kubenswrapper[4932]: I0218 19:57:49.413064 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:50 crc kubenswrapper[4932]: I0218 19:57:50.479313 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkrfs" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" containerID="cri-o://82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" gracePeriod=2 Feb 18 19:57:50 crc kubenswrapper[4932]: I0218 19:57:50.862112 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:50 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:50 crc kubenswrapper[4932]: > Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.115012 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.175146 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.175303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.175388 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.176224 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities" (OuterVolumeSpecName: "utilities") pod "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" (UID: "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.198323 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82" (OuterVolumeSpecName: "kube-api-access-tks82") pod "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" (UID: "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a"). InnerVolumeSpecName "kube-api-access-tks82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.238380 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" (UID: "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.277289 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.277325 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.277339 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490624 4932 generic.go:334] "Generic (PLEG): container finished" podID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" exitCode=0 Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490694 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490698 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708"} Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490870 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"78e974bf0eef1be4d5c43dcef00fe5ff25f566422b51bca7b7994c8bd4a0c17b"} Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490933 4932 scope.go:117] "RemoveContainer" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.514952 4932 scope.go:117] "RemoveContainer" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.535003 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.552844 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.557264 4932 scope.go:117] "RemoveContainer" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.595654 4932 scope.go:117] "RemoveContainer" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.596169 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708\": container with ID starting with 82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708 not found: ID does not exist" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596299 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708"} err="failed to get container status \"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708\": rpc error: code = NotFound desc = could not find container \"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708\": container with ID starting with 82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708 not found: ID does not exist" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596334 4932 scope.go:117] "RemoveContainer" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.596839 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d\": container with ID starting with 9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d not found: ID does not exist" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596863 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d"} err="failed to get container status \"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d\": rpc error: code = NotFound desc = could not find container \"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d\": container with ID 
starting with 9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d not found: ID does not exist" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596876 4932 scope.go:117] "RemoveContainer" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.597671 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285\": container with ID starting with ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285 not found: ID does not exist" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.597703 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285"} err="failed to get container status \"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285\": rpc error: code = NotFound desc = could not find container \"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285\": container with ID starting with ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285 not found: ID does not exist" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.747167 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bacbe3a_bfae_4502_806c_ba2eb1c7b48a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bacbe3a_bfae_4502_806c_ba2eb1c7b48a.slice/crio-78e974bf0eef1be4d5c43dcef00fe5ff25f566422b51bca7b7994c8bd4a0c17b\": RecentStats: unable to find data in memory cache]" Feb 18 19:57:53 crc kubenswrapper[4932]: I0218 19:57:53.193416 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" path="/var/lib/kubelet/pods/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a/volumes" Feb 18 19:57:58 crc kubenswrapper[4932]: I0218 19:57:58.053837 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:57:59 crc kubenswrapper[4932]: I0218 19:57:59.010189 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:00 crc kubenswrapper[4932]: I0218 19:58:00.865833 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:58:00 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:58:00 crc kubenswrapper[4932]: > Feb 18 19:58:01 crc kubenswrapper[4932]: I0218 19:58:01.702966 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" containerID="cri-o://7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" gracePeriod=604797 Feb 18 19:58:02 crc kubenswrapper[4932]: I0218 19:58:02.435183 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" containerID="cri-o://70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980" gracePeriod=604797 Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.267598 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460035 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460105 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460214 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460247 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460323 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460361 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460472 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460534 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460987 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.461907 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.462826 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.467106 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.468519 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info" (OuterVolumeSpecName: "pod-info") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.469574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.470636 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp" (OuterVolumeSpecName: "kube-api-access-2dtlp") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "kube-api-access-2dtlp". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.494627 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.514702 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data" (OuterVolumeSpecName: "config-data") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.525319 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf" (OuterVolumeSpecName: "server-conf") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562755 4932 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562791 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562801 4932 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562810 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562818 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562841 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562850 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562858 4932 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562865 4932 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.582630 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.627932 4932 generic.go:334] "Generic (PLEG): container finished" podID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" exitCode=0
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628035 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628053 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerDied","Data":"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"}
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerDied","Data":"080ccaf3edee131274523286f1e1cdf3b8aebb0e277f6e516ffc7e73a0cc72c7"}
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628401 4932 scope.go:117] "RemoveContainer" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.631805 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.632073 4932 generic.go:334] "Generic (PLEG): container finished" podID="cd547864-4d03-45ae-8bb1-10a360d36599" containerID="70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980" exitCode=0
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.632107 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerDied","Data":"70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980"}
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.664967 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.665003 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.690389 4932 scope.go:117] "RemoveContainer" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.725000 4932 scope.go:117] "RemoveContainer" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"
Feb 18 19:58:03 crc kubenswrapper[4932]: E0218 19:58:03.725578 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9\": container with ID starting with 7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9 not found: ID does not exist" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.725636 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"} err="failed to get container status \"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9\": rpc error: code = NotFound desc = could not find container \"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9\": container with ID starting with 7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9 not found: ID does not exist"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.725677 4932 scope.go:117] "RemoveContainer" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"
Feb 18 19:58:03 crc kubenswrapper[4932]: E0218 19:58:03.726032 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d\": container with ID starting with 9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d not found: ID does not exist" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.726077 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"} err="failed to get container status \"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d\": rpc error: code = NotFound desc = could not find container \"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d\": container with ID starting with 9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d not found: ID does not exist"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.876399 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971185 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971756 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971859 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.973116 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.973668 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974049 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974086 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974123 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974149 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974244 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") "
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.977985 4932 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.978211 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.978229 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.994908 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq" (OuterVolumeSpecName: "kube-api-access-fwqrq") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "kube-api-access-fwqrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.999720 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info" (OuterVolumeSpecName: "pod-info") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.003550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.003780 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.014338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.028311 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data" (OuterVolumeSpecName: "config-data") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.045505 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.079365 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087584 4932 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087615 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087648 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087658 4932 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087667 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087703 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.107324 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf" (OuterVolumeSpecName: "server-conf") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131107 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131508 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131527 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq"
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131595 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="setup-container"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131603 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="setup-container"
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131614 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131621 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq"
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131630 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131655 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server"
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131666 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-utilities"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131672 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-utilities"
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131684 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-content"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131690 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-content"
Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131702 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="setup-container"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131708 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="setup-container"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131988 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.132005 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.132051 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.140294 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.144259 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.151050 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ptcgt"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.151291 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.151454 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.152201 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.153125 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.154460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.160628 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.163161 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.189284 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.189351 4932 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.211035 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291285 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jld\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-kube-api-access-p7jld\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291343 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d466e51b-87dc-413f-aeb2-f3566a46eeb5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291672 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291750 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.292278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d466e51b-87dc-413f-aeb2-f3566a46eeb5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.292362 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394526 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394660 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d466e51b-87dc-413f-aeb2-f3566a46eeb5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jld\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-kube-api-access-p7jld\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394734 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394798 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d466e51b-87dc-413f-aeb2-f3566a46eeb5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394953 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394984 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.395013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.395828 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.396749 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.397603 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.398148 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.399029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.399715 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.402208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d466e51b-87dc-413f-aeb2-f3566a46eeb5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.403351 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.403589 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d466e51b-87dc-413f-aeb2-f3566a46eeb5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.404674 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.422426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jld\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-kube-api-access-p7jld\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.444886 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.496160 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.643548 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerDied","Data":"df1d9be37e083e5a4584427f91148d70b49af32f754e3fd54a2d761cb7b0f9e2"} Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.643608 4932 scope.go:117] "RemoveContainer" containerID="70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.643770 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.674225 4932 scope.go:117] "RemoveContainer" containerID="7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.745119 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.757254 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.786866 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.788524 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.793710 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.793888 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794041 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l229h" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794185 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794295 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794401 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794550 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.817973 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906288 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906396 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906452 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da761aa0-8599-4aee-9078-ecaf2a04f259-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906709 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da761aa0-8599-4aee-9078-ecaf2a04f259-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906794 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906871 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906971 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57r74\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-kube-api-access-57r74\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.907078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.907204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.907249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008399 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/da761aa0-8599-4aee-9078-ecaf2a04f259-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008490 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008551 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57r74\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-kube-api-access-57r74\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008609 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008716 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da761aa0-8599-4aee-9078-ecaf2a04f259-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 
19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.009211 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.009681 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010360 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010383 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010570 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.016260 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da761aa0-8599-4aee-9078-ecaf2a04f259-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.016986 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.017427 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da761aa0-8599-4aee-9078-ecaf2a04f259-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.017726 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.035998 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57r74\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-kube-api-access-57r74\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.049588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.054462 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.114000 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.199012 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" path="/var/lib/kubelet/pods/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c/volumes" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.200281 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" path="/var/lib/kubelet/pods/cd547864-4d03-45ae-8bb1-10a360d36599/volumes" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.594632 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:05 crc kubenswrapper[4932]: W0218 19:58:05.598589 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda761aa0_8599_4aee_9078_ecaf2a04f259.slice/crio-00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41 WatchSource:0}: Error finding container 00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41: Status 404 returned error can't find the container with id 00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41 Feb 
18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.664268 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerStarted","Data":"c8549d8b3a7b9f36c193177519d51b37e168e6fc798904ac0942bb7314cea96a"} Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.666803 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerStarted","Data":"00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41"} Feb 18 19:58:07 crc kubenswrapper[4932]: I0218 19:58:07.691768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerStarted","Data":"1b8037c10dd0ae5363e01ddd9be7d861df2dac91332a086e6a5ad5c81c97cf0c"} Feb 18 19:58:07 crc kubenswrapper[4932]: I0218 19:58:07.695272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerStarted","Data":"fdbbac4e492980e8a9121b26a7487b983c016efa01fd41436c950b17336cea34"} Feb 18 19:58:09 crc kubenswrapper[4932]: I0218 19:58:09.914060 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:09 crc kubenswrapper[4932]: I0218 19:58:09.963537 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:10 crc kubenswrapper[4932]: I0218 19:58:10.155894 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:58:11 crc kubenswrapper[4932]: I0218 19:58:11.739997 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nvplf" 
podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" containerID="cri-o://09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" gracePeriod=2 Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.169585 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.279488 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.282612 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.284775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.285597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities" (OuterVolumeSpecName: "utilities") pod "cdcdbe71-7ce4-4038-b13a-345f14b7a80d" (UID: "cdcdbe71-7ce4-4038-b13a-345f14b7a80d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.286133 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.295476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf" (OuterVolumeSpecName: "kube-api-access-npzvf") pod "cdcdbe71-7ce4-4038-b13a-345f14b7a80d" (UID: "cdcdbe71-7ce4-4038-b13a-345f14b7a80d"). InnerVolumeSpecName "kube-api-access-npzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.388348 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.391814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdcdbe71-7ce4-4038-b13a-345f14b7a80d" (UID: "cdcdbe71-7ce4-4038-b13a-345f14b7a80d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.490511 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751442 4932 generic.go:334] "Generic (PLEG): container finished" podID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" exitCode=0 Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a"} Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751538 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"bd6d1dac6bf3ebca465127b4e668733d6b3eab206b93e857a3ffc9cc951ff030"} Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751577 4932 scope.go:117] "RemoveContainer" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751583 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.799803 4932 scope.go:117] "RemoveContainer" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.808042 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.816976 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.829372 4932 scope.go:117] "RemoveContainer" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.872858 4932 scope.go:117] "RemoveContainer" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" Feb 18 19:58:12 crc kubenswrapper[4932]: E0218 19:58:12.873480 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a\": container with ID starting with 09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a not found: ID does not exist" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.873537 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a"} err="failed to get container status \"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a\": rpc error: code = NotFound desc = could not find container \"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a\": container with ID starting with 09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a not found: ID does 
not exist" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.873570 4932 scope.go:117] "RemoveContainer" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" Feb 18 19:58:12 crc kubenswrapper[4932]: E0218 19:58:12.874069 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8\": container with ID starting with 023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8 not found: ID does not exist" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.874153 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8"} err="failed to get container status \"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8\": rpc error: code = NotFound desc = could not find container \"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8\": container with ID starting with 023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8 not found: ID does not exist" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.874230 4932 scope.go:117] "RemoveContainer" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" Feb 18 19:58:12 crc kubenswrapper[4932]: E0218 19:58:12.875587 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a\": container with ID starting with 6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a not found: ID does not exist" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.875687 4932 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a"} err="failed to get container status \"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a\": rpc error: code = NotFound desc = could not find container \"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a\": container with ID starting with 6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a not found: ID does not exist" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.194026 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" path="/var/lib/kubelet/pods/cdcdbe71-7ce4-4038-b13a-345f14b7a80d/volumes" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631083 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:13 crc kubenswrapper[4932]: E0218 19:58:13.631469 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-content" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631486 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-content" Feb 18 19:58:13 crc kubenswrapper[4932]: E0218 19:58:13.631528 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631535 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" Feb 18 19:58:13 crc kubenswrapper[4932]: E0218 19:58:13.631548 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-utilities" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631555 4932 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-utilities" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631741 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.633106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.635536 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.647570 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714863 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714900 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " 
pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714972 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.715024 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.715063 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.715113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817717 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " 
pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817846 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817902 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817988 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.818030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 
19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.818089 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.819965 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.822114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.822205 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.822664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.823149 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.823385 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.849314 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.966041 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.467109 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.776424 4932 generic.go:334] "Generic (PLEG): container finished" podID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerID="296162c2e2b6e7e74bfb52a7dd47f5e5692cdadf7b2924591218a8984d84e2df" exitCode=0 Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.776505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerDied","Data":"296162c2e2b6e7e74bfb52a7dd47f5e5692cdadf7b2924591218a8984d84e2df"} Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.776844 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerStarted","Data":"9e066ae757e52496269799f9e7d2df6157f05d0842da76118feae5596927b07d"} Feb 18 19:58:15 crc kubenswrapper[4932]: I0218 19:58:15.792590 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerStarted","Data":"40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf"} Feb 18 19:58:15 crc kubenswrapper[4932]: I0218 19:58:15.792955 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:15 crc kubenswrapper[4932]: I0218 19:58:15.828801 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" podStartSLOduration=2.8287835919999997 podStartE2EDuration="2.828783592s" podCreationTimestamp="2026-02-18 19:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:15.822837854 +0000 UTC m=+1459.404792729" watchObservedRunningTime="2026-02-18 19:58:15.828783592 +0000 UTC m=+1459.410738437" Feb 18 19:58:23 crc kubenswrapper[4932]: I0218 19:58:23.968355 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.051705 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.052061 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" containerID="cri-o://6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" gracePeriod=10 Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.250383 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d5b5f5475-czsf7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.269128 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d5b5f5475-czsf7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.269269 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378373 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378657 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-swift-storage-0\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378739 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-svc\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378936 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8pq\" (UniqueName: \"kubernetes.io/projected/f60b5155-406c-4c95-9848-2792faba2235-kube-api-access-pt8pq\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-config\") pod 
\"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.379026 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.379120 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481211 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481324 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481440 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-swift-storage-0\") 
pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-svc\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt8pq\" (UniqueName: \"kubernetes.io/projected/f60b5155-406c-4c95-9848-2792faba2235-kube-api-access-pt8pq\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481616 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-config\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481651 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.482761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: 
\"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.483475 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.483987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-svc\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.484454 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.484484 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-swift-storage-0\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.485061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-config\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 
crc kubenswrapper[4932]: I0218 19:58:24.504972 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt8pq\" (UniqueName: \"kubernetes.io/projected/f60b5155-406c-4c95-9848-2792faba2235-kube-api-access-pt8pq\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.588486 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.598703 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685488 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685542 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685571 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685630 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685658 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.689820 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb" (OuterVolumeSpecName: "kube-api-access-vbxnb") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "kube-api-access-vbxnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.749491 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.774069 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config" (OuterVolumeSpecName: "config") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.774250 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.781114 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.788953 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.788988 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.789000 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.789010 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.789020 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.796866 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888423 4932 generic.go:334] "Generic (PLEG): container finished" podID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" exitCode=0 Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888485 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerDied","Data":"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632"} Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888517 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerDied","Data":"655d5fb141738aad0155e62442b9035066c7a9ec2985b3b96a40dbf2d8892c36"} Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888534 4932 scope.go:117] "RemoveContainer" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888768 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.896156 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.926118 4932 scope.go:117] "RemoveContainer" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.937941 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.948488 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.957073 4932 scope.go:117] "RemoveContainer" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" Feb 18 19:58:24 crc kubenswrapper[4932]: E0218 19:58:24.957868 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632\": container with ID starting with 6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632 not found: ID does not exist" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.957917 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632"} err="failed to get container status \"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632\": rpc error: code = NotFound desc = could not find container \"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632\": container with ID starting with 
6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632 not found: ID does not exist" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.957964 4932 scope.go:117] "RemoveContainer" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" Feb 18 19:58:24 crc kubenswrapper[4932]: E0218 19:58:24.958450 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e\": container with ID starting with 9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e not found: ID does not exist" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.958480 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e"} err="failed to get container status \"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e\": rpc error: code = NotFound desc = could not find container \"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e\": container with ID starting with 9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e not found: ID does not exist" Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.069884 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d5b5f5475-czsf7"] Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.197090 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" path="/var/lib/kubelet/pods/f91611fc-84cb-4a52-8943-b4a5c7481f45/volumes" Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.901779 4932 generic.go:334] "Generic (PLEG): container finished" podID="f60b5155-406c-4c95-9848-2792faba2235" containerID="3a57ec4bc725cd3028981967c5dd1616d9b120d3aa5bb3014525c1e775a6bf41" 
exitCode=0 Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.902125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" event={"ID":"f60b5155-406c-4c95-9848-2792faba2235","Type":"ContainerDied","Data":"3a57ec4bc725cd3028981967c5dd1616d9b120d3aa5bb3014525c1e775a6bf41"} Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.902158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" event={"ID":"f60b5155-406c-4c95-9848-2792faba2235","Type":"ContainerStarted","Data":"634def3d091823f021dfcf5822b341cfc185c7fcb4324aea6b4a44455cbbe7db"} Feb 18 19:58:26 crc kubenswrapper[4932]: I0218 19:58:26.912961 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" event={"ID":"f60b5155-406c-4c95-9848-2792faba2235","Type":"ContainerStarted","Data":"edc9be894ecf757f4f8758c1d70ace3924bc64d1c2b1b352cd0d88079cc0d516"} Feb 18 19:58:26 crc kubenswrapper[4932]: I0218 19:58:26.915193 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:26 crc kubenswrapper[4932]: I0218 19:58:26.959049 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" podStartSLOduration=2.959019516 podStartE2EDuration="2.959019516s" podCreationTimestamp="2026-02-18 19:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:26.954487263 +0000 UTC m=+1470.536442108" watchObservedRunningTime="2026-02-18 19:58:26.959019516 +0000 UTC m=+1470.540974361" Feb 18 19:58:34 crc kubenswrapper[4932]: I0218 19:58:34.591449 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:34 crc kubenswrapper[4932]: I0218 19:58:34.664652 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:34 crc kubenswrapper[4932]: I0218 19:58:34.665097 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" containerID="cri-o://40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf" gracePeriod=10 Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.019459 4932 generic.go:334] "Generic (PLEG): container finished" podID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerID="40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf" exitCode=0 Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.019545 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerDied","Data":"40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf"} Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.235275 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.321862 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.321948 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322014 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322098 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322270 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322502 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322649 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.335454 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq" (OuterVolumeSpecName: "kube-api-access-9pxmq") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "kube-api-access-9pxmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.385333 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.393494 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.398839 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config" (OuterVolumeSpecName: "config") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.399053 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.405408 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.410673 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425800 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425837 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425849 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425859 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425867 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425877 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425918 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.029589 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerDied","Data":"9e066ae757e52496269799f9e7d2df6157f05d0842da76118feae5596927b07d"} Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.029670 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.029821 4932 scope.go:117] "RemoveContainer" containerID="40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.055919 4932 scope.go:117] "RemoveContainer" containerID="296162c2e2b6e7e74bfb52a7dd47f5e5692cdadf7b2924591218a8984d84e2df" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.066384 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.076715 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:37 crc kubenswrapper[4932]: I0218 19:58:37.196147 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" path="/var/lib/kubelet/pods/96b80a13-2da6-4c91-a09d-1e935313a13f/volumes" Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.072782 4932 generic.go:334] "Generic (PLEG): container finished" podID="d466e51b-87dc-413f-aeb2-f3566a46eeb5" containerID="fdbbac4e492980e8a9121b26a7487b983c016efa01fd41436c950b17336cea34" exitCode=0 Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.072903 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerDied","Data":"fdbbac4e492980e8a9121b26a7487b983c016efa01fd41436c950b17336cea34"} Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.074804 4932 
generic.go:334] "Generic (PLEG): container finished" podID="da761aa0-8599-4aee-9078-ecaf2a04f259" containerID="1b8037c10dd0ae5363e01ddd9be7d861df2dac91332a086e6a5ad5c81c97cf0c" exitCode=0 Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.075027 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerDied","Data":"1b8037c10dd0ae5363e01ddd9be7d861df2dac91332a086e6a5ad5c81c97cf0c"} Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.094272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerStarted","Data":"660ff38fc3ee70e8c08d06e92bb83529e53d107b967135ac9f4e35aec18b3c1f"} Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.096540 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.100605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerStarted","Data":"18213112e6dd7afd3658f77b264117ddcd17389ac45045600896335fdb1ba2bd"} Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.100848 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.127486 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.12745948 podStartE2EDuration="38.12745948s" podCreationTimestamp="2026-02-18 19:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:41.122786014 +0000 UTC m=+1484.704740869" watchObservedRunningTime="2026-02-18 19:58:41.12745948 +0000 UTC 
m=+1484.709414325" Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.157398 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.157377327 podStartE2EDuration="37.157377327s" podCreationTimestamp="2026-02-18 19:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:41.147807898 +0000 UTC m=+1484.729762743" watchObservedRunningTime="2026-02-18 19:58:41.157377327 +0000 UTC m=+1484.739332172" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.871386 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn"] Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.873649 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.876679 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.876782 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.876849 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.876925 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.876989 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.877087 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.877275 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.877723 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.877833 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.878750 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881077 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881468 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881656 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881557 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.921280 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn"] Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973148 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973357 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973677 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075337 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075540 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.081600 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc 
kubenswrapper[4932]: I0218 19:58:47.082302 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.085739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.094234 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.242590 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.914454 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn"] Feb 18 19:58:48 crc kubenswrapper[4932]: I0218 19:58:48.168407 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerStarted","Data":"67773ceef267d72504d994b948ac20b7519b184fb9a0f5ce9474a58a999c5b39"} Feb 18 19:58:54 crc kubenswrapper[4932]: I0218 19:58:54.499585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:58:55 crc kubenswrapper[4932]: I0218 19:58:55.117497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:57 crc kubenswrapper[4932]: I0218 19:58:57.278689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerStarted","Data":"4264b6f8a7461be6203ec28a9363259faa2db962dc1de52b367238e88dcc3b36"} Feb 18 19:58:57 crc kubenswrapper[4932]: I0218 19:58:57.299595 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" podStartSLOduration=2.462644248 podStartE2EDuration="11.299572454s" podCreationTimestamp="2026-02-18 19:58:46 +0000 UTC" firstStartedPulling="2026-02-18 19:58:47.927967854 +0000 UTC m=+1491.509922699" lastFinishedPulling="2026-02-18 19:58:56.76489606 +0000 UTC m=+1500.346850905" observedRunningTime="2026-02-18 19:58:57.298673552 +0000 UTC m=+1500.880628417" watchObservedRunningTime="2026-02-18 19:58:57.299572454 +0000 UTC m=+1500.881527299" Feb 18 19:58:57 crc kubenswrapper[4932]: 
I0218 19:58:57.606576 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:58:57 crc kubenswrapper[4932]: I0218 19:58:57.606902 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:59:02 crc kubenswrapper[4932]: I0218 19:59:02.853473 4932 scope.go:117] "RemoveContainer" containerID="0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e" Feb 18 19:59:02 crc kubenswrapper[4932]: I0218 19:59:02.882010 4932 scope.go:117] "RemoveContainer" containerID="f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a" Feb 18 19:59:02 crc kubenswrapper[4932]: I0218 19:59:02.936248 4932 scope.go:117] "RemoveContainer" containerID="da426b82651806673889b52158bea2dd7d720c322fbc355879403c25885c3ec1" Feb 18 19:59:07 crc kubenswrapper[4932]: I0218 19:59:07.392951 4932 generic.go:334] "Generic (PLEG): container finished" podID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerID="4264b6f8a7461be6203ec28a9363259faa2db962dc1de52b367238e88dcc3b36" exitCode=0 Feb 18 19:59:07 crc kubenswrapper[4932]: I0218 19:59:07.393693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerDied","Data":"4264b6f8a7461be6203ec28a9363259faa2db962dc1de52b367238e88dcc3b36"} Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.081641 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.259648 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.259775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.259921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.260005 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.269169 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5" (OuterVolumeSpecName: "kube-api-access-2xcv5") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "kube-api-access-2xcv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.269992 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.300009 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.313582 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory" (OuterVolumeSpecName: "inventory") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.366319 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.366842 4932 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.366926 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.367005 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.443992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerDied","Data":"67773ceef267d72504d994b948ac20b7519b184fb9a0f5ce9474a58a999c5b39"} Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.444056 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67773ceef267d72504d994b948ac20b7519b184fb9a0f5ce9474a58a999c5b39" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.444140 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.518402 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4"] Feb 18 19:59:09 crc kubenswrapper[4932]: E0218 19:59:09.518776 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.518794 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.519000 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.519766 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522201 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522282 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522824 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522919 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.547916 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4"] Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.677721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.677933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.678217 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.780913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.781066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.782455 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.784964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.785393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.797307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.840948 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:10 crc kubenswrapper[4932]: I0218 19:59:10.415334 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4"] Feb 18 19:59:10 crc kubenswrapper[4932]: I0218 19:59:10.459831 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerStarted","Data":"1c57b153b6380278224723c713679cd5d8ba06ea370a3ca6688d6ec583ada080"} Feb 18 19:59:11 crc kubenswrapper[4932]: I0218 19:59:11.472527 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerStarted","Data":"61d86c09c955c0e8549b971a80757dba1d7bb26249376d95200ffd9c4ae8a004"} Feb 18 19:59:11 crc kubenswrapper[4932]: I0218 19:59:11.494464 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" podStartSLOduration=2.063098996 podStartE2EDuration="2.494432376s" podCreationTimestamp="2026-02-18 19:59:09 +0000 UTC" firstStartedPulling="2026-02-18 19:59:10.422202187 +0000 UTC m=+1514.004157032" lastFinishedPulling="2026-02-18 19:59:10.853535567 +0000 UTC m=+1514.435490412" observedRunningTime="2026-02-18 19:59:11.487577806 +0000 UTC m=+1515.069532651" watchObservedRunningTime="2026-02-18 19:59:11.494432376 +0000 UTC m=+1515.076387231" Feb 18 19:59:13 crc kubenswrapper[4932]: I0218 19:59:13.493718 4932 generic.go:334] "Generic (PLEG): container finished" podID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerID="61d86c09c955c0e8549b971a80757dba1d7bb26249376d95200ffd9c4ae8a004" exitCode=0 Feb 18 19:59:13 crc kubenswrapper[4932]: I0218 19:59:13.493800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerDied","Data":"61d86c09c955c0e8549b971a80757dba1d7bb26249376d95200ffd9c4ae8a004"} Feb 18 19:59:14 crc kubenswrapper[4932]: I0218 19:59:14.995135 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.009798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"645d3722-79e7-4b78-a24d-2f5eca6c2714\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.009869 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod \"645d3722-79e7-4b78-a24d-2f5eca6c2714\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.009939 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"645d3722-79e7-4b78-a24d-2f5eca6c2714\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.019371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz" (OuterVolumeSpecName: "kube-api-access-cqvcz") pod "645d3722-79e7-4b78-a24d-2f5eca6c2714" (UID: "645d3722-79e7-4b78-a24d-2f5eca6c2714"). InnerVolumeSpecName "kube-api-access-cqvcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.041004 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "645d3722-79e7-4b78-a24d-2f5eca6c2714" (UID: "645d3722-79e7-4b78-a24d-2f5eca6c2714"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.046143 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory" (OuterVolumeSpecName: "inventory") pod "645d3722-79e7-4b78-a24d-2f5eca6c2714" (UID: "645d3722-79e7-4b78-a24d-2f5eca6c2714"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.113344 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.113376 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.113385 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.519654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" 
event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerDied","Data":"1c57b153b6380278224723c713679cd5d8ba06ea370a3ca6688d6ec583ada080"} Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.519713 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c57b153b6380278224723c713679cd5d8ba06ea370a3ca6688d6ec583ada080" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.519746 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.625437 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52"] Feb 18 19:59:15 crc kubenswrapper[4932]: E0218 19:59:15.627250 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.627407 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.628153 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.629652 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.676758 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.677157 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.677281 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.677293 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.695925 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52"] Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.829505 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.829621 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.830011 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.830481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.932766 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.932966 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.933067 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.933167 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.942035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.944296 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.954581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.961906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:16 crc kubenswrapper[4932]: I0218 19:59:16.010964 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:16 crc kubenswrapper[4932]: I0218 19:59:16.525711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52"] Feb 18 19:59:16 crc kubenswrapper[4932]: W0218 19:59:16.531101 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe60214_3673_4c3b_a043_ee483870fe48.slice/crio-388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3 WatchSource:0}: Error finding container 388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3: Status 404 returned error can't find the container with id 388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3 Feb 18 19:59:17 crc kubenswrapper[4932]: I0218 19:59:17.548245 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerStarted","Data":"c3b8f5c0d6d86b3ea458a9947928d998d7a190d335a7fcd6011fecfca46d5ad1"} Feb 18 19:59:17 crc kubenswrapper[4932]: I0218 19:59:17.548629 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" 
event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerStarted","Data":"388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3"} Feb 18 19:59:17 crc kubenswrapper[4932]: I0218 19:59:17.585510 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" podStartSLOduration=2.198518862 podStartE2EDuration="2.585489558s" podCreationTimestamp="2026-02-18 19:59:15 +0000 UTC" firstStartedPulling="2026-02-18 19:59:16.536797865 +0000 UTC m=+1520.118752710" lastFinishedPulling="2026-02-18 19:59:16.923768561 +0000 UTC m=+1520.505723406" observedRunningTime="2026-02-18 19:59:17.568621428 +0000 UTC m=+1521.150576293" watchObservedRunningTime="2026-02-18 19:59:17.585489558 +0000 UTC m=+1521.167444403" Feb 18 19:59:27 crc kubenswrapper[4932]: I0218 19:59:27.606660 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:59:27 crc kubenswrapper[4932]: I0218 19:59:27.607296 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.015638 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.018711 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.028749 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.084877 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.085258 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.085422 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.190209 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.190448 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.190594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.191354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.192015 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.255530 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.349481 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.828127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.931825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerStarted","Data":"645ec31d43647f59f81cb86a1b8ef96d7e32c5b0c176847cf45357ed914898d2"} Feb 18 19:59:53 crc kubenswrapper[4932]: I0218 19:59:53.946851 4932 generic.go:334] "Generic (PLEG): container finished" podID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerID="f882eadc7a59782637627308960cb1ee779a8f928bf56cc5840ac520f5d219a4" exitCode=0 Feb 18 19:59:53 crc kubenswrapper[4932]: I0218 19:59:53.946931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"f882eadc7a59782637627308960cb1ee779a8f928bf56cc5840ac520f5d219a4"} Feb 18 19:59:54 crc kubenswrapper[4932]: I0218 19:59:54.958152 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerStarted","Data":"ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc"} Feb 18 19:59:55 crc kubenswrapper[4932]: I0218 19:59:55.967563 4932 generic.go:334] "Generic (PLEG): container finished" podID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerID="ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc" exitCode=0 Feb 18 19:59:55 crc kubenswrapper[4932]: I0218 19:59:55.967633 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" 
event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc"} Feb 18 19:59:56 crc kubenswrapper[4932]: I0218 19:59:56.981746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerStarted","Data":"452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c"} Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.013473 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ml42" podStartSLOduration=3.3086679500000002 podStartE2EDuration="6.013437882s" podCreationTimestamp="2026-02-18 19:59:51 +0000 UTC" firstStartedPulling="2026-02-18 19:59:53.949270572 +0000 UTC m=+1557.531225417" lastFinishedPulling="2026-02-18 19:59:56.654040504 +0000 UTC m=+1560.235995349" observedRunningTime="2026-02-18 19:59:57.003249509 +0000 UTC m=+1560.585204414" watchObservedRunningTime="2026-02-18 19:59:57.013437882 +0000 UTC m=+1560.595392727" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.606147 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.606472 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.606515 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.607139 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.607215 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" gracePeriod=600 Feb 18 19:59:57 crc kubenswrapper[4932]: E0218 19:59:57.746157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.992774 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" exitCode=0 Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.992842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"} Feb 18 19:59:57 crc 
kubenswrapper[4932]: I0218 19:59:57.992890 4932 scope.go:117] "RemoveContainer" containerID="691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.993583 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 19:59:57 crc kubenswrapper[4932]: E0218 19:59:57.993832 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.150653 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.152676 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.157105 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.157250 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.190157 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.251852 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.252383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.252495 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.354827 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.354916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.354963 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.357039 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.365592 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.375490 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.498666 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.962974 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:00:00 crc kubenswrapper[4932]: W0218 20:00:00.973073 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9637eec3_3d3f_435b_9a57_ef318aa5300c.slice/crio-4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4 WatchSource:0}: Error finding container 4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4: Status 404 returned error can't find the container with id 4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4 Feb 18 20:00:01 crc kubenswrapper[4932]: I0218 20:00:01.026999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerStarted","Data":"4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4"} Feb 18 20:00:02 crc 
kubenswrapper[4932]: I0218 20:00:02.038962 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerStarted","Data":"a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f"} Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.069295 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" podStartSLOduration=2.069266996 podStartE2EDuration="2.069266996s" podCreationTimestamp="2026-02-18 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:02.056301134 +0000 UTC m=+1565.638256019" watchObservedRunningTime="2026-02-18 20:00:02.069266996 +0000 UTC m=+1565.651221841" Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.351581 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.351928 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.424213 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.054514 4932 generic.go:334] "Generic (PLEG): container finished" podID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerID="a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f" exitCode=0 Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.055812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" 
event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerDied","Data":"a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f"} Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.150114 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.152113 4932 scope.go:117] "RemoveContainer" containerID="502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61" Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.229918 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.458159 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.549849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"9637eec3-3d3f-435b-9a57-ef318aa5300c\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.549940 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"9637eec3-3d3f-435b-9a57-ef318aa5300c\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.549971 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"9637eec3-3d3f-435b-9a57-ef318aa5300c\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " Feb 18 20:00:04 crc 
kubenswrapper[4932]: I0218 20:00:04.551396 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9637eec3-3d3f-435b-9a57-ef318aa5300c" (UID: "9637eec3-3d3f-435b-9a57-ef318aa5300c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.556503 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9637eec3-3d3f-435b-9a57-ef318aa5300c" (UID: "9637eec3-3d3f-435b-9a57-ef318aa5300c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.570787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh" (OuterVolumeSpecName: "kube-api-access-b9nrh") pod "9637eec3-3d3f-435b-9a57-ef318aa5300c" (UID: "9637eec3-3d3f-435b-9a57-ef318aa5300c"). InnerVolumeSpecName "kube-api-access-b9nrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.652763 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.652804 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.652831 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126463 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126802 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerDied","Data":"4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4"} Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126828 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4" Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126569 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ml42" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" 
containerID="cri-o://452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c" gracePeriod=2 Feb 18 20:00:05 crc kubenswrapper[4932]: E0218 20:00:05.289459 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9637eec3_3d3f_435b_9a57_ef318aa5300c.slice\": RecentStats: unable to find data in memory cache]" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.142510 4932 generic.go:334] "Generic (PLEG): container finished" podID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerID="452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c" exitCode=0 Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.142822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c"} Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.466676 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.599516 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.599611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.599763 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.600824 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities" (OuterVolumeSpecName: "utilities") pod "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" (UID: "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.619079 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5" (OuterVolumeSpecName: "kube-api-access-h8rf5") pod "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" (UID: "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d"). InnerVolumeSpecName "kube-api-access-h8rf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.623617 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" (UID: "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.701954 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.701988 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.702011 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.157988 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"645ec31d43647f59f81cb86a1b8ef96d7e32c5b0c176847cf45357ed914898d2"} Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.158414 4932 scope.go:117] "RemoveContainer" containerID="452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.158052 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.189207 4932 scope.go:117] "RemoveContainer" containerID="ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.210861 4932 scope.go:117] "RemoveContainer" containerID="f882eadc7a59782637627308960cb1ee779a8f928bf56cc5840ac520f5d219a4" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.222123 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.230939 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 20:00:09 crc kubenswrapper[4932]: I0218 20:00:09.205513 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" path="/var/lib/kubelet/pods/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d/volumes" Feb 18 20:00:10 crc kubenswrapper[4932]: I0218 20:00:10.179724 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:10 crc kubenswrapper[4932]: E0218 20:00:10.180274 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:24 crc kubenswrapper[4932]: I0218 20:00:24.180699 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:24 crc kubenswrapper[4932]: E0218 20:00:24.181823 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:35 crc kubenswrapper[4932]: I0218 20:00:35.180352 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:35 crc kubenswrapper[4932]: E0218 20:00:35.181348 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:49 crc kubenswrapper[4932]: I0218 20:00:49.179507 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:49 crc kubenswrapper[4932]: E0218 20:00:49.180758 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.162965 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524081-jsnmw"] Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164013 4932 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-utilities" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164030 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-utilities" Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164048 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerName="collect-profiles" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164056 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerName="collect-profiles" Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164076 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-content" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164084 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-content" Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164100 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164108 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164376 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164410 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerName="collect-profiles" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.165236 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.175438 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-jsnmw"] Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.249671 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.250345 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.250419 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.250474 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352889 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.363563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.365800 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.366514 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.376456 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.493154 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.959751 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-jsnmw"] Feb 18 20:01:01 crc kubenswrapper[4932]: I0218 20:01:01.818036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerStarted","Data":"4b4e199b09552b73ff8acfbf5edd85042cafa10c4171859d3dfd7a31f2670d3d"} Feb 18 20:01:01 crc kubenswrapper[4932]: I0218 20:01:01.818096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerStarted","Data":"59f3482ad312ba2973909cb9359693447542d72ec11c1a10a1a59349c104baa5"} Feb 18 20:01:01 crc kubenswrapper[4932]: I0218 20:01:01.835669 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524081-jsnmw" podStartSLOduration=1.835652573 podStartE2EDuration="1.835652573s" podCreationTimestamp="2026-02-18 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:01.832671829 +0000 UTC m=+1625.414626714" watchObservedRunningTime="2026-02-18 20:01:01.835652573 +0000 UTC m=+1625.417607428" Feb 18 20:01:02 crc kubenswrapper[4932]: I0218 20:01:02.179239 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:02 crc kubenswrapper[4932]: E0218 20:01:02.179482 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:03 crc kubenswrapper[4932]: I0218 20:01:03.271633 4932 scope.go:117] "RemoveContainer" containerID="2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c" Feb 18 20:01:03 crc kubenswrapper[4932]: I0218 20:01:03.305634 4932 scope.go:117] "RemoveContainer" containerID="b1963dc8bdedaa6e9c39260e4aa454ec9b1f122ff3e931be78b28e85782c2717" Feb 18 20:01:04 crc kubenswrapper[4932]: I0218 20:01:04.848886 4932 generic.go:334] "Generic (PLEG): container finished" podID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerID="4b4e199b09552b73ff8acfbf5edd85042cafa10c4171859d3dfd7a31f2670d3d" exitCode=0 Feb 18 20:01:04 crc kubenswrapper[4932]: I0218 20:01:04.848974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerDied","Data":"4b4e199b09552b73ff8acfbf5edd85042cafa10c4171859d3dfd7a31f2670d3d"} Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.276166 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.385906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.386371 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.386936 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.386982 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.393137 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g" (OuterVolumeSpecName: "kube-api-access-7qq8g") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "kube-api-access-7qq8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.393489 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.414905 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.441808 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data" (OuterVolumeSpecName: "config-data") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490030 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490073 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490087 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490098 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.874720 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerDied","Data":"59f3482ad312ba2973909cb9359693447542d72ec11c1a10a1a59349c104baa5"} Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.874766 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f3482ad312ba2973909cb9359693447542d72ec11c1a10a1a59349c104baa5" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.874771 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:17 crc kubenswrapper[4932]: I0218 20:01:17.193819 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:17 crc kubenswrapper[4932]: E0218 20:01:17.194903 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:28 crc kubenswrapper[4932]: I0218 20:01:28.178728 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:28 crc kubenswrapper[4932]: E0218 20:01:28.179660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:40 crc kubenswrapper[4932]: I0218 20:01:40.179044 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:40 crc kubenswrapper[4932]: E0218 20:01:40.179927 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:52 crc kubenswrapper[4932]: I0218 20:01:52.179637 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:52 crc kubenswrapper[4932]: E0218 20:01:52.180933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:03 crc kubenswrapper[4932]: I0218 20:02:03.416843 4932 scope.go:117] "RemoveContainer" containerID="f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6" Feb 18 20:02:05 crc kubenswrapper[4932]: I0218 20:02:05.180981 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:05 crc kubenswrapper[4932]: E0218 20:02:05.181611 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:17 crc kubenswrapper[4932]: I0218 20:02:17.187718 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:17 crc kubenswrapper[4932]: E0218 20:02:17.188639 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:20 crc kubenswrapper[4932]: I0218 20:02:20.007492 4932 generic.go:334] "Generic (PLEG): container finished" podID="dbe60214-3673-4c3b-a043-ee483870fe48" containerID="c3b8f5c0d6d86b3ea458a9947928d998d7a190d335a7fcd6011fecfca46d5ad1" exitCode=0 Feb 18 20:02:20 crc kubenswrapper[4932]: I0218 20:02:20.007571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerDied","Data":"c3b8f5c0d6d86b3ea458a9947928d998d7a190d335a7fcd6011fecfca46d5ad1"} Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.556733 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664355 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664500 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664545 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.669935 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk" (OuterVolumeSpecName: "kube-api-access-pkgqk") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "kube-api-access-pkgqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.670480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.696469 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.708221 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory" (OuterVolumeSpecName: "inventory") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766685 4932 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766723 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766737 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766748 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.032231 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerDied","Data":"388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3"} Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.032553 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.032451 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.131052 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"] Feb 18 20:02:22 crc kubenswrapper[4932]: E0218 20:02:22.133391 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe60214-3673-4c3b-a043-ee483870fe48" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133422 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe60214-3673-4c3b-a043-ee483870fe48" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: E0218 20:02:22.133443 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerName="keystone-cron" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133452 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerName="keystone-cron" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133827 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe60214-3673-4c3b-a043-ee483870fe48" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133845 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerName="keystone-cron" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.134706 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.138636 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.138955 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.138981 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.139040 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.145207 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"] Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.275497 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.275587 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc 
kubenswrapper[4932]: I0218 20:02:22.275724 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.377397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.377541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.377661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.383815 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.384427 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.399919 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.449810 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:23 crc kubenswrapper[4932]: I0218 20:02:23.005034 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"] Feb 18 20:02:23 crc kubenswrapper[4932]: I0218 20:02:23.011152 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:02:23 crc kubenswrapper[4932]: I0218 20:02:23.050487 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerStarted","Data":"56c1cc5eef2cc64c63813c104a79ed3516a686af7cfcd28574f92635e466d803"} Feb 18 20:02:24 crc kubenswrapper[4932]: I0218 20:02:24.061461 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerStarted","Data":"30c66c47bf8249b3b27644ba8768bf427241f9daf4bcbee4fae5c4b1e9538966"} Feb 18 20:02:24 crc kubenswrapper[4932]: I0218 20:02:24.078058 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" podStartSLOduration=1.521745477 podStartE2EDuration="2.078038952s" podCreationTimestamp="2026-02-18 20:02:22 +0000 UTC" firstStartedPulling="2026-02-18 20:02:23.010934344 +0000 UTC m=+1706.592889189" lastFinishedPulling="2026-02-18 20:02:23.567227799 +0000 UTC m=+1707.149182664" observedRunningTime="2026-02-18 20:02:24.076031012 +0000 UTC m=+1707.657985857" watchObservedRunningTime="2026-02-18 20:02:24.078038952 +0000 UTC m=+1707.659993797" Feb 18 20:02:31 crc kubenswrapper[4932]: I0218 20:02:31.179943 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:31 crc 
kubenswrapper[4932]: E0218 20:02:31.180796 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:39 crc kubenswrapper[4932]: I0218 20:02:39.049950 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 20:02:39 crc kubenswrapper[4932]: I0218 20:02:39.060501 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 20:02:39 crc kubenswrapper[4932]: I0218 20:02:39.190234 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" path="/var/lib/kubelet/pods/64352a4d-f3af-44e1-b1d7-cc5e125de560/volumes" Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.047193 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.065557 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.081233 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.090913 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.105301 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.128272 4932 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.146424 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.165242 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.176855 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.186187 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.040352 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-vtbzd"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.056397 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.070507 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-vtbzd"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.082827 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.191594 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bb1c31-7377-432f-8434-72981200f1ac" path="/var/lib/kubelet/pods/02bb1c31-7377-432f-8434-72981200f1ac/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.192685 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" path="/var/lib/kubelet/pods/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.193719 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35590261-332c-47e0-89e9-4eef3fd36086" path="/var/lib/kubelet/pods/35590261-332c-47e0-89e9-4eef3fd36086/volumes"
Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.194833 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" path="/var/lib/kubelet/pods/56349fdd-8b87-4910-b182-555b5913d5ee/volumes"
Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.196320 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" path="/var/lib/kubelet/pods/7fa1fef8-5a2e-4518-8641-d4b594fc29a3/volumes"
Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.197149 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" path="/var/lib/kubelet/pods/bec590bc-e2ef-49e0-80be-27af6f69aa06/volumes"
Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.198097 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" path="/var/lib/kubelet/pods/c4c8a6a6-4944-4c6f-be98-9dde833b89e5/volumes"
Feb 18 20:02:43 crc kubenswrapper[4932]: I0218 20:02:43.179683 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:02:43 crc kubenswrapper[4932]: E0218 20:02:43.180897 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:02:54 crc kubenswrapper[4932]: I0218 20:02:54.179811 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:02:54 crc kubenswrapper[4932]: E0218 20:02:54.182365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:03:01 crc kubenswrapper[4932]: I0218 20:03:01.042769 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xbdgt"]
Feb 18 20:03:01 crc kubenswrapper[4932]: I0218 20:03:01.056188 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xbdgt"]
Feb 18 20:03:01 crc kubenswrapper[4932]: I0218 20:03:01.196975 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" path="/var/lib/kubelet/pods/3eb4a050-ebc6-4319-b27f-9c9cce058ec1/volumes"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.495297 4932 scope.go:117] "RemoveContainer" containerID="3a33312c61bc35aede7b854947f5cacef494c07faca9fd46ae2f217a195bc457"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.527936 4932 scope.go:117] "RemoveContainer" containerID="3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.586885 4932 scope.go:117] "RemoveContainer" containerID="95e00440e590eb387c9cf8e2e2f9778a04bbe9e0e014879d57139cdcea3fd2d4"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.632139 4932 scope.go:117] "RemoveContainer" containerID="5e6b5516d234b57d2f859d33d51d54c0aee524d02399dad696a4642cf7cceb8a"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.672646 4932 scope.go:117] "RemoveContainer" containerID="eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.718653 4932 scope.go:117] "RemoveContainer" containerID="38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.761726 4932 scope.go:117] "RemoveContainer" containerID="2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.782381 4932 scope.go:117] "RemoveContainer" containerID="3b567de8b4f1ae33989815fad19a6d8b9f69d7df099f4fd8ff235740848c1cc0"
Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.806488 4932 scope.go:117] "RemoveContainer" containerID="439c6cd70d2e38e21f55a810c1fb66ab1e1dc66541977f85b2ca4f91d6caf61b"
Feb 18 20:03:06 crc kubenswrapper[4932]: I0218 20:03:06.038394 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hvt6h"]
Feb 18 20:03:06 crc kubenswrapper[4932]: I0218 20:03:06.047657 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hvt6h"]
Feb 18 20:03:06 crc kubenswrapper[4932]: I0218 20:03:06.179800 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:03:06 crc kubenswrapper[4932]: E0218 20:03:06.180361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.036531 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hn6qq"]
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.050942 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"]
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.061947 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"]
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.070733 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hn6qq"]
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.078699 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"]
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.085984 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"]
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.191275 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56734660-55cc-463c-89f2-131bc9109dab" path="/var/lib/kubelet/pods/56734660-55cc-463c-89f2-131bc9109dab/volumes"
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.191845 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" path="/var/lib/kubelet/pods/7680bf6b-efd6-452a-8900-09cf55b203ff/volumes"
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.192442 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" path="/var/lib/kubelet/pods/ac9c39c2-bf9e-4f11-b37f-17089fce08e7/volumes"
Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.193028 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7988cea-6aa8-4552-8965-04b417c91831" path="/var/lib/kubelet/pods/f7988cea-6aa8-4552-8965-04b417c91831/volumes"
Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.039907 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hbs76"]
Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.052284 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"]
Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.061187 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"]
Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.073521 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hbs76"]
Feb 18 20:03:13 crc kubenswrapper[4932]: I0218 20:03:13.192647 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9deee6-7804-492e-88c9-147087152416" path="/var/lib/kubelet/pods/0b9deee6-7804-492e-88c9-147087152416/volumes"
Feb 18 20:03:13 crc kubenswrapper[4932]: I0218 20:03:13.195044 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" path="/var/lib/kubelet/pods/ca3578cc-7bd4-4e77-8b29-bbb38f588260/volumes"
Feb 18 20:03:17 crc kubenswrapper[4932]: I0218 20:03:17.035643 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rl7xx"]
Feb 18 20:03:17 crc kubenswrapper[4932]: I0218 20:03:17.048457 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rl7xx"]
Feb 18 20:03:17 crc kubenswrapper[4932]: I0218 20:03:17.196697 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" path="/var/lib/kubelet/pods/1bbf2873-6ca9-4569-b5b6-3003511c02ba/volumes"
Feb 18 20:03:21 crc kubenswrapper[4932]: I0218 20:03:21.179274 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:03:21 crc kubenswrapper[4932]: E0218 20:03:21.180412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:03:24 crc kubenswrapper[4932]: I0218 20:03:24.072477 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-4ghxf"]
Feb 18 20:03:24 crc kubenswrapper[4932]: I0218 20:03:24.094444 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-4ghxf"]
Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.051024 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-h526s"]
Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.062466 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-h526s"]
Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.204027 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" path="/var/lib/kubelet/pods/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a/volumes"
Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.204668 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" path="/var/lib/kubelet/pods/bc05154b-7f25-4fb1-8293-9aba06523c37/volumes"
Feb 18 20:03:34 crc kubenswrapper[4932]: I0218 20:03:34.179417 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:03:34 crc kubenswrapper[4932]: E0218 20:03:34.180410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:03:48 crc kubenswrapper[4932]: I0218 20:03:48.179437 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:03:48 crc kubenswrapper[4932]: E0218 20:03:48.180486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:04:01 crc kubenswrapper[4932]: I0218 20:04:01.179735 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:04:01 crc kubenswrapper[4932]: E0218 20:04:01.183231 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:04:02 crc kubenswrapper[4932]: I0218 20:04:02.312690 4932 generic.go:334] "Generic (PLEG): container finished" podID="e460efcc-55a7-4c68-9c14-91009dee948b" containerID="30c66c47bf8249b3b27644ba8768bf427241f9daf4bcbee4fae5c4b1e9538966" exitCode=0
Feb 18 20:04:02 crc kubenswrapper[4932]: I0218 20:04:02.312791 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerDied","Data":"30c66c47bf8249b3b27644ba8768bf427241f9daf4bcbee4fae5c4b1e9538966"}
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.741756 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.880078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"e460efcc-55a7-4c68-9c14-91009dee948b\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") "
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.880222 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"e460efcc-55a7-4c68-9c14-91009dee948b\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") "
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.880437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"e460efcc-55a7-4c68-9c14-91009dee948b\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") "
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.886563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2" (OuterVolumeSpecName: "kube-api-access-5h7g2") pod "e460efcc-55a7-4c68-9c14-91009dee948b" (UID: "e460efcc-55a7-4c68-9c14-91009dee948b"). InnerVolumeSpecName "kube-api-access-5h7g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.931230 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e460efcc-55a7-4c68-9c14-91009dee948b" (UID: "e460efcc-55a7-4c68-9c14-91009dee948b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.932193 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory" (OuterVolumeSpecName: "inventory") pod "e460efcc-55a7-4c68-9c14-91009dee948b" (UID: "e460efcc-55a7-4c68-9c14-91009dee948b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.978292 4932 scope.go:117] "RemoveContainer" containerID="2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459"
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.984294 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.984323 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") on node \"crc\" DevicePath \"\""
Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.984338 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.052427 4932 scope.go:117] "RemoveContainer" containerID="48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.074747 4932 scope.go:117] "RemoveContainer" containerID="03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.098085 4932 scope.go:117] "RemoveContainer" containerID="4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.121195 4932 scope.go:117] "RemoveContainer" containerID="979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.145936 4932 scope.go:117] "RemoveContainer" containerID="104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.191712 4932 scope.go:117] "RemoveContainer" containerID="2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.214109 4932 scope.go:117] "RemoveContainer" containerID="e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.257986 4932 scope.go:117] "RemoveContainer" containerID="0d634b73a958b2e21485770f0ca87b0cc9a8038deca230cf324c0047e0c7f89e"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.342243 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.342425 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerDied","Data":"56c1cc5eef2cc64c63813c104a79ed3516a686af7cfcd28574f92635e466d803"}
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.342482 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c1cc5eef2cc64c63813c104a79ed3516a686af7cfcd28574f92635e466d803"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.418511 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"]
Feb 18 20:04:04 crc kubenswrapper[4932]: E0218 20:04:04.419361 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e460efcc-55a7-4c68-9c14-91009dee948b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.419438 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460efcc-55a7-4c68-9c14-91009dee948b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.419707 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e460efcc-55a7-4c68-9c14-91009dee948b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.420409 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.422861 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.423165 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.423248 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.432165 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.455599 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"]
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.493355 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.493408 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.493447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.595876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.595939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.596086 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.600706 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.600751 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.622752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.740753 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"
Feb 18 20:04:05 crc kubenswrapper[4932]: I0218 20:04:05.764065 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"]
Feb 18 20:04:06 crc kubenswrapper[4932]: I0218 20:04:06.635813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerStarted","Data":"7b6cbe66b4880567e8856647b80588deb9823126d1f35e4825ba3e7a73a88b7a"}
Feb 18 20:04:06 crc kubenswrapper[4932]: I0218 20:04:06.635877 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerStarted","Data":"1d9a351b5e32ee2106112cc0bb1d8becea3a0f04cb2a76228d9bf8ba749d8d89"}
Feb 18 20:04:06 crc kubenswrapper[4932]: I0218 20:04:06.658755 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" podStartSLOduration=2.152086411 podStartE2EDuration="2.65873351s" podCreationTimestamp="2026-02-18 20:04:04 +0000 UTC" firstStartedPulling="2026-02-18 20:04:05.772405034 +0000 UTC m=+1809.354359879" lastFinishedPulling="2026-02-18 20:04:06.279052123 +0000 UTC m=+1809.861006978" observedRunningTime="2026-02-18 20:04:06.650321351 +0000 UTC m=+1810.232276206" watchObservedRunningTime="2026-02-18 20:04:06.65873351 +0000 UTC m=+1810.240688365"
Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.048547 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vldrp"]
Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.057264 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vldrp"]
Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.065202 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-df7zx"]
Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.072483 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-df7zx"]
Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.219963 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" path="/var/lib/kubelet/pods/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd/volumes"
Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.220691 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" path="/var/lib/kubelet/pods/30efc86e-0c26-42e4-b907-1d4d985912ed/volumes"
Feb 18 20:04:12 crc kubenswrapper[4932]: I0218 20:04:12.180743 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:04:12 crc kubenswrapper[4932]: E0218 20:04:12.181740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:04:19 crc kubenswrapper[4932]: I0218 20:04:19.049960 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cpzcj"]
Feb 18 20:04:19 crc kubenswrapper[4932]: I0218 20:04:19.068660 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cpzcj"]
Feb 18 20:04:19 crc kubenswrapper[4932]: I0218 20:04:19.201363 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" path="/var/lib/kubelet/pods/43f771cb-173f-4939-b1d1-e7d1b21834cb/volumes"
Feb 18 20:04:20 crc kubenswrapper[4932]: I0218 20:04:20.046857 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kfzmp"]
Feb 18 20:04:20 crc kubenswrapper[4932]: I0218 20:04:20.062418 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kfzmp"]
Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.037523 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nqxxn"]
Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.045888 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nqxxn"]
Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.194043 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" path="/var/lib/kubelet/pods/3f831817-b833-4ee3-b1e9-77d9c02416ed/volumes"
Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.195259 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" path="/var/lib/kubelet/pods/c4c20fc2-cf78-41c9-9e37-c5bea35d472f/volumes"
Feb 18 20:04:26 crc kubenswrapper[4932]: I0218 20:04:26.180282 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:04:26 crc kubenswrapper[4932]: E0218 20:04:26.181449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:04:38 crc kubenswrapper[4932]: I0218 20:04:38.179030 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:04:38 crc kubenswrapper[4932]: E0218 20:04:38.179823 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:04:49 crc kubenswrapper[4932]: I0218 20:04:49.179962 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:04:49 crc kubenswrapper[4932]: E0218 20:04:49.180960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:04:52 crc kubenswrapper[4932]: I0218 20:04:52.043390 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"]
Feb 18 20:04:52 crc kubenswrapper[4932]: I0218 20:04:52.059596 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.076270 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.085899 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qlt9g"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.095325 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.104708 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.117109 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.128645 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qlt9g"]
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.195469 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" path="/var/lib/kubelet/pods/20264fab-dfb6-4e8c-90c3-755f6877b798/volumes"
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.196543 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" path="/var/lib/kubelet/pods/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6/volumes"
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.197522 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" path="/var/lib/kubelet/pods/aec70d32-3fdc-410f-9d9d-9b108e079cfe/volumes"
Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.198368 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" path="/var/lib/kubelet/pods/ccc8867f-cb56-47ad-9d08-a25feca678fc/volumes"
Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.029739 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"]
Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.041207 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"]
Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.054093 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"]
Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.066374 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"]
Feb 18 20:04:55 crc kubenswrapper[4932]: I0218 20:04:55.203046 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" path="/var/lib/kubelet/pods/7703d71c-4ee9-4495-ab74-0a76c148d377/volumes"
Feb 18 20:04:55 crc kubenswrapper[4932]: I0218 20:04:55.205646 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" path="/var/lib/kubelet/pods/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3/volumes"
Feb 18 20:05:03 crc kubenswrapper[4932]: I0218 20:05:03.180139 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.268593 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4"}
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.486163 4932 scope.go:117] "RemoveContainer" containerID="35671852602ab05670d4f45f3855e4d52f08702c9d127db3894e27656cb622ec"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.528458 4932 scope.go:117] "RemoveContainer" containerID="a9bd3203306587d945952a2d8b8a38aa992a6b26567d9b7e7b075edf3005412d"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.568622 4932 scope.go:117] "RemoveContainer" containerID="561bed36cff9fe4632c1003655b4ef598d4e8ea47f27f52a6c7b3f87e135ec7f"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.632735 4932 scope.go:117] "RemoveContainer" containerID="80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.682275 4932 scope.go:117] "RemoveContainer" containerID="e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.728313 4932 scope.go:117] "RemoveContainer" containerID="49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.763471 4932 scope.go:117] "RemoveContainer" containerID="d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.791226 4932 scope.go:117] "RemoveContainer" containerID="5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.817288 4932 scope.go:117] "RemoveContainer" containerID="2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.841680 4932 scope.go:117] "RemoveContainer" containerID="682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753"
Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.876986 4932 scope.go:117] "RemoveContainer" containerID="708bd68c17f2cb8bb6aefdb45fc9ab2a2b088e8be75ba3d7c52b1b8b365c0f1f"
Feb 18 20:05:19 crc kubenswrapper[4932]: I0218 20:05:19.428793 4932 generic.go:334] "Generic (PLEG): container finished" podID="12f764db-8a47-4554-bea3-c71b6663cdec" containerID="7b6cbe66b4880567e8856647b80588deb9823126d1f35e4825ba3e7a73a88b7a" exitCode=0
Feb 18 20:05:19 crc kubenswrapper[4932]: I0218 20:05:19.428905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerDied","Data":"7b6cbe66b4880567e8856647b80588deb9823126d1f35e4825ba3e7a73a88b7a"}
Feb 18 20:05:20 crc kubenswrapper[4932]: I0218 20:05:20.880604 4932
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.060050 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"12f764db-8a47-4554-bea3-c71b6663cdec\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.060167 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"12f764db-8a47-4554-bea3-c71b6663cdec\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.060515 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"12f764db-8a47-4554-bea3-c71b6663cdec\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.068848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d" (OuterVolumeSpecName: "kube-api-access-5574d") pod "12f764db-8a47-4554-bea3-c71b6663cdec" (UID: "12f764db-8a47-4554-bea3-c71b6663cdec"). InnerVolumeSpecName "kube-api-access-5574d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.090371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory" (OuterVolumeSpecName: "inventory") pod "12f764db-8a47-4554-bea3-c71b6663cdec" (UID: "12f764db-8a47-4554-bea3-c71b6663cdec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.091142 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12f764db-8a47-4554-bea3-c71b6663cdec" (UID: "12f764db-8a47-4554-bea3-c71b6663cdec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.162792 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.162827 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.162837 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.447258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" 
event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerDied","Data":"1d9a351b5e32ee2106112cc0bb1d8becea3a0f04cb2a76228d9bf8ba749d8d89"} Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.447300 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9a351b5e32ee2106112cc0bb1d8becea3a0f04cb2a76228d9bf8ba749d8d89" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.447299 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.526752 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq"] Feb 18 20:05:21 crc kubenswrapper[4932]: E0218 20:05:21.527398 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f764db-8a47-4554-bea3-c71b6663cdec" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.527421 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f764db-8a47-4554-bea3-c71b6663cdec" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.527643 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f764db-8a47-4554-bea3-c71b6663cdec" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.528400 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531295 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531351 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531407 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531617 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.535639 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq"] Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.673197 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.673305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 
20:05:21.673547 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.775855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.776509 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.776584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.782426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.782441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.805119 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.853712 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:22 crc kubenswrapper[4932]: I0218 20:05:22.406603 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq"] Feb 18 20:05:22 crc kubenswrapper[4932]: I0218 20:05:22.461291 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerStarted","Data":"1813233506b02ea3afb1cbb753a84e217bbaf1d4c7c50db6b62f9230f4b4f44a"} Feb 18 20:05:23 crc kubenswrapper[4932]: I0218 20:05:23.472687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerStarted","Data":"dacc67acdb626da274e85e8df5c7e1e59e99d2547879ff21d240d39c66eeabdd"} Feb 18 20:05:23 crc kubenswrapper[4932]: I0218 20:05:23.503145 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" podStartSLOduration=2.020672416 podStartE2EDuration="2.503076505s" podCreationTimestamp="2026-02-18 20:05:21 +0000 UTC" firstStartedPulling="2026-02-18 20:05:22.392252212 +0000 UTC m=+1885.974207077" lastFinishedPulling="2026-02-18 20:05:22.874656291 +0000 UTC m=+1886.456611166" observedRunningTime="2026-02-18 20:05:23.491855367 +0000 UTC m=+1887.073810232" watchObservedRunningTime="2026-02-18 20:05:23.503076505 +0000 UTC m=+1887.085031360" Feb 18 20:05:28 crc kubenswrapper[4932]: I0218 20:05:28.526953 4932 generic.go:334] "Generic (PLEG): container finished" podID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerID="dacc67acdb626da274e85e8df5c7e1e59e99d2547879ff21d240d39c66eeabdd" exitCode=0 Feb 18 20:05:28 crc kubenswrapper[4932]: I0218 20:05:28.527025 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerDied","Data":"dacc67acdb626da274e85e8df5c7e1e59e99d2547879ff21d240d39c66eeabdd"} Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.033003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.056700 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.056894 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.056921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.065525 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk" (OuterVolumeSpecName: "kube-api-access-pq9bk") pod "dd9738d8-59a2-4c1a-b9af-58d1f7efd947" (UID: "dd9738d8-59a2-4c1a-b9af-58d1f7efd947"). InnerVolumeSpecName "kube-api-access-pq9bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.095326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory" (OuterVolumeSpecName: "inventory") pod "dd9738d8-59a2-4c1a-b9af-58d1f7efd947" (UID: "dd9738d8-59a2-4c1a-b9af-58d1f7efd947"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.095378 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd9738d8-59a2-4c1a-b9af-58d1f7efd947" (UID: "dd9738d8-59a2-4c1a-b9af-58d1f7efd947"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.158825 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.158863 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.158874 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.544906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" 
event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerDied","Data":"1813233506b02ea3afb1cbb753a84e217bbaf1d4c7c50db6b62f9230f4b4f44a"} Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.544942 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1813233506b02ea3afb1cbb753a84e217bbaf1d4c7c50db6b62f9230f4b4f44a" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.544984 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.621419 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml"] Feb 18 20:05:30 crc kubenswrapper[4932]: E0218 20:05:30.621798 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.621816 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.622030 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.622680 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.624616 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.625044 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.625261 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.625951 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.634897 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml"] Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.668649 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.668750 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.668772 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.770976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.771045 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.771079 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.775593 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.775999 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.799608 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.975945 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:31 crc kubenswrapper[4932]: I0218 20:05:31.525165 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml"] Feb 18 20:05:31 crc kubenswrapper[4932]: I0218 20:05:31.576057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerStarted","Data":"d9e23bb9221f30e4c904e27e398042c535ab54fd9e12565fb3207e201a576c68"} Feb 18 20:05:32 crc kubenswrapper[4932]: I0218 20:05:32.591090 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerStarted","Data":"50bc0af33067e1ceb9abed0771cb3bdb1fe3c9bb9acc856016812709bcf5281d"} Feb 18 20:05:32 crc kubenswrapper[4932]: I0218 20:05:32.618390 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" podStartSLOduration=2.119119346 podStartE2EDuration="2.618372692s" podCreationTimestamp="2026-02-18 20:05:30 +0000 UTC" firstStartedPulling="2026-02-18 20:05:31.539798957 +0000 UTC m=+1895.121753802" lastFinishedPulling="2026-02-18 20:05:32.039052293 +0000 UTC m=+1895.621007148" observedRunningTime="2026-02-18 20:05:32.608560989 +0000 UTC m=+1896.190515854" watchObservedRunningTime="2026-02-18 20:05:32.618372692 +0000 UTC m=+1896.200327537" Feb 18 20:05:38 crc kubenswrapper[4932]: I0218 20:05:38.068395 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 20:05:38 crc kubenswrapper[4932]: I0218 20:05:38.082607 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 20:05:39 crc kubenswrapper[4932]: I0218 
20:05:39.193232 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" path="/var/lib/kubelet/pods/c88334ec-64f6-41ba-aee5-d5323e8c0c25/volumes" Feb 18 20:06:03 crc kubenswrapper[4932]: I0218 20:06:03.054252 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 20:06:03 crc kubenswrapper[4932]: I0218 20:06:03.079573 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 20:06:03 crc kubenswrapper[4932]: I0218 20:06:03.189478 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" path="/var/lib/kubelet/pods/6473c7ac-af7d-4556-aa86-28aabc85694a/volumes" Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.071689 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"] Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.087504 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"] Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.166120 4932 scope.go:117] "RemoveContainer" containerID="4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b" Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.189508 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" path="/var/lib/kubelet/pods/5d3a07cf-a084-46a0-8ca2-830e0838d575/volumes" Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.209372 4932 scope.go:117] "RemoveContainer" containerID="7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd" Feb 18 20:06:09 crc kubenswrapper[4932]: I0218 20:06:09.969090 4932 generic.go:334] "Generic (PLEG): container finished" podID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerID="50bc0af33067e1ceb9abed0771cb3bdb1fe3c9bb9acc856016812709bcf5281d" exitCode=0 Feb 18 
20:06:09 crc kubenswrapper[4932]: I0218 20:06:09.969231 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerDied","Data":"50bc0af33067e1ceb9abed0771cb3bdb1fe3c9bb9acc856016812709bcf5281d"} Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.527356 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.652137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"a3390076-ebf5-4856-9646-e7f82a4b5f28\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.652355 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"a3390076-ebf5-4856-9646-e7f82a4b5f28\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.652490 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"a3390076-ebf5-4856-9646-e7f82a4b5f28\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.657657 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp" (OuterVolumeSpecName: "kube-api-access-fjsrp") pod "a3390076-ebf5-4856-9646-e7f82a4b5f28" (UID: "a3390076-ebf5-4856-9646-e7f82a4b5f28"). 
InnerVolumeSpecName "kube-api-access-fjsrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.689691 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory" (OuterVolumeSpecName: "inventory") pod "a3390076-ebf5-4856-9646-e7f82a4b5f28" (UID: "a3390076-ebf5-4856-9646-e7f82a4b5f28"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.700357 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a3390076-ebf5-4856-9646-e7f82a4b5f28" (UID: "a3390076-ebf5-4856-9646-e7f82a4b5f28"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.754329 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.754370 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.754388 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.993920 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerDied","Data":"d9e23bb9221f30e4c904e27e398042c535ab54fd9e12565fb3207e201a576c68"} Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.993960 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e23bb9221f30e4c904e27e398042c535ab54fd9e12565fb3207e201a576c68" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.994424 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.134155 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp"] Feb 18 20:06:12 crc kubenswrapper[4932]: E0218 20:06:12.134676 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.134693 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.134872 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.135545 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138237 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138490 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138716 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138943 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.152357 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp"] Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.263894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.264267 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.264326 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.365771 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.365896 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.365949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.370762 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.371079 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.382783 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.492944 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:13 crc kubenswrapper[4932]: I0218 20:06:13.028347 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp"] Feb 18 20:06:14 crc kubenswrapper[4932]: I0218 20:06:14.024569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerStarted","Data":"d5dcd19c405c09a09a5c129ab068a2aad22357b5c4e4a4481c35f8e392967a5c"} Feb 18 20:06:14 crc kubenswrapper[4932]: I0218 20:06:14.024962 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerStarted","Data":"93beb98692f78ee18e33929ce67ee50501cc2c2ec35b52dc31e5b3ff5a85e1d7"} Feb 18 20:06:14 crc kubenswrapper[4932]: I0218 20:06:14.050803 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" podStartSLOduration=1.647470701 podStartE2EDuration="2.050786403s" podCreationTimestamp="2026-02-18 20:06:12 +0000 UTC" firstStartedPulling="2026-02-18 20:06:13.03491345 +0000 UTC m=+1936.616868295" lastFinishedPulling="2026-02-18 20:06:13.438229152 +0000 UTC m=+1937.020183997" observedRunningTime="2026-02-18 20:06:14.045743728 +0000 UTC m=+1937.627698613" watchObservedRunningTime="2026-02-18 20:06:14.050786403 +0000 UTC m=+1937.632741248" Feb 18 20:06:47 crc kubenswrapper[4932]: I0218 20:06:47.056892 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 20:06:47 crc kubenswrapper[4932]: I0218 20:06:47.071390 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 20:06:47 crc kubenswrapper[4932]: I0218 
20:06:47.193986 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738744b3-86e1-432c-8380-0d428a2e8263" path="/var/lib/kubelet/pods/738744b3-86e1-432c-8380-0d428a2e8263/volumes" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.648690 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.652474 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.662100 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.776659 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.777050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.777110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 
20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880552 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880745 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.881268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 
20:07:02.911147 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.989484 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:03 crc kubenswrapper[4932]: I0218 20:07:03.576211 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.533623 4932 generic.go:334] "Generic (PLEG): container finished" podID="9385117b-aef4-4fc9-9633-c237337beea2" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" exitCode=0 Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.533704 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da"} Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.534125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerStarted","Data":"6ebce87b6b61ea80c41ffd1203cdea867546f05926dea452da2ed3b5a10dd57d"} Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.537304 4932 generic.go:334] "Generic (PLEG): container finished" podID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" containerID="d5dcd19c405c09a09a5c129ab068a2aad22357b5c4e4a4481c35f8e392967a5c" exitCode=0 Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.537359 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerDied","Data":"d5dcd19c405c09a09a5c129ab068a2aad22357b5c4e4a4481c35f8e392967a5c"} Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.543986 4932 scope.go:117] "RemoveContainer" containerID="e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10" Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.547771 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerStarted","Data":"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2"} Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.598041 4932 scope.go:117] "RemoveContainer" containerID="9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f" Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.972731 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.045760 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.045893 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.046008 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.052574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst" (OuterVolumeSpecName: "kube-api-access-b8vst") pod "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" (UID: "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab"). InnerVolumeSpecName "kube-api-access-b8vst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.076960 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" (UID: "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.077372 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory" (OuterVolumeSpecName: "inventory") pod "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" (UID: "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.149133 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.149711 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.149776 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.558586 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" 
event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerDied","Data":"93beb98692f78ee18e33929ce67ee50501cc2c2ec35b52dc31e5b3ff5a85e1d7"} Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.558624 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93beb98692f78ee18e33929ce67ee50501cc2c2ec35b52dc31e5b3ff5a85e1d7" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.558691 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.563164 4932 generic.go:334] "Generic (PLEG): container finished" podID="9385117b-aef4-4fc9-9633-c237337beea2" containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" exitCode=0 Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.563249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2"} Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.671574 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qp4lv"] Feb 18 20:07:06 crc kubenswrapper[4932]: E0218 20:07:06.672363 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.672450 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.672713 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" 
containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.673629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.676540 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.676714 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.677079 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.677468 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.679704 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qp4lv"] Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.764944 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.765016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.765048 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.866936 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.867038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.867085 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.871580 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.879647 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.884564 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.998720 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.562481 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qp4lv"] Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.577801 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerStarted","Data":"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346"} Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.580157 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerStarted","Data":"b76d6ccafd2c0451bf7aa7ed91aceca0b2f73e23637c2896e2b6163faeb9266a"} Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.599500 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hp6v9" podStartSLOduration=3.177628936 podStartE2EDuration="5.599482537s" podCreationTimestamp="2026-02-18 20:07:02 +0000 UTC" firstStartedPulling="2026-02-18 20:07:04.54872277 +0000 UTC m=+1988.130677625" lastFinishedPulling="2026-02-18 20:07:06.970576381 +0000 UTC m=+1990.552531226" observedRunningTime="2026-02-18 20:07:07.593920039 +0000 UTC m=+1991.175874894" watchObservedRunningTime="2026-02-18 20:07:07.599482537 +0000 UTC m=+1991.181437382" Feb 18 20:07:08 crc kubenswrapper[4932]: I0218 20:07:08.596155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerStarted","Data":"cb369d03d8bcd330fee85bc5a8dacb1da44e035006df4f2345401aa8ff8cca9a"} Feb 18 20:07:08 crc kubenswrapper[4932]: I0218 20:07:08.624844 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" podStartSLOduration=2.219164514 podStartE2EDuration="2.624817734s" podCreationTimestamp="2026-02-18 20:07:06 +0000 UTC" firstStartedPulling="2026-02-18 20:07:07.556345019 +0000 UTC m=+1991.138299874" lastFinishedPulling="2026-02-18 20:07:07.961998229 +0000 UTC m=+1991.543953094" observedRunningTime="2026-02-18 20:07:08.609985047 +0000 UTC m=+1992.191939902" watchObservedRunningTime="2026-02-18 20:07:08.624817734 +0000 UTC m=+1992.206772599" Feb 18 20:07:12 crc kubenswrapper[4932]: I0218 20:07:12.989898 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:12 crc kubenswrapper[4932]: I0218 20:07:12.990331 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:13 crc kubenswrapper[4932]: I0218 20:07:13.060410 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:13 crc kubenswrapper[4932]: I0218 20:07:13.684417 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:13 crc kubenswrapper[4932]: I0218 20:07:13.741890 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:15 crc kubenswrapper[4932]: I0218 20:07:15.661448 4932 generic.go:334] "Generic (PLEG): container finished" podID="1f19857d-f085-411f-a08f-412d1173ed1c" containerID="cb369d03d8bcd330fee85bc5a8dacb1da44e035006df4f2345401aa8ff8cca9a" exitCode=0 Feb 18 20:07:15 crc kubenswrapper[4932]: I0218 20:07:15.661582 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" 
event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerDied","Data":"cb369d03d8bcd330fee85bc5a8dacb1da44e035006df4f2345401aa8ff8cca9a"} Feb 18 20:07:15 crc kubenswrapper[4932]: I0218 20:07:15.662026 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hp6v9" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" containerID="cri-o://f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" gracePeriod=2 Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.161013 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.266354 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"9385117b-aef4-4fc9-9633-c237337beea2\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.266725 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"9385117b-aef4-4fc9-9633-c237337beea2\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.266962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"9385117b-aef4-4fc9-9633-c237337beea2\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.267757 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities" 
(OuterVolumeSpecName: "utilities") pod "9385117b-aef4-4fc9-9633-c237337beea2" (UID: "9385117b-aef4-4fc9-9633-c237337beea2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.268210 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.274531 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp" (OuterVolumeSpecName: "kube-api-access-sccvp") pod "9385117b-aef4-4fc9-9633-c237337beea2" (UID: "9385117b-aef4-4fc9-9633-c237337beea2"). InnerVolumeSpecName "kube-api-access-sccvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.317075 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9385117b-aef4-4fc9-9633-c237337beea2" (UID: "9385117b-aef4-4fc9-9633-c237337beea2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.370117 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.370262 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672152 4932 generic.go:334] "Generic (PLEG): container finished" podID="9385117b-aef4-4fc9-9633-c237337beea2" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" exitCode=0 Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672233 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672250 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346"} Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"6ebce87b6b61ea80c41ffd1203cdea867546f05926dea452da2ed3b5a10dd57d"} Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672524 4932 scope.go:117] "RemoveContainer" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.721494 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.721881 4932 scope.go:117] "RemoveContainer" containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.733475 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.746519 4932 scope.go:117] "RemoveContainer" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.805243 4932 scope.go:117] "RemoveContainer" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" Feb 18 20:07:16 crc kubenswrapper[4932]: E0218 20:07:16.805819 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346\": container with ID starting with f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346 not found: ID does not exist" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.805883 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346"} err="failed to get container status \"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346\": rpc error: code = NotFound desc = could not find container \"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346\": container with ID starting with f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346 not found: ID does not exist" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.805922 4932 scope.go:117] "RemoveContainer" 
containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" Feb 18 20:07:16 crc kubenswrapper[4932]: E0218 20:07:16.809678 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2\": container with ID starting with 11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2 not found: ID does not exist" containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.809730 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2"} err="failed to get container status \"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2\": rpc error: code = NotFound desc = could not find container \"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2\": container with ID starting with 11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2 not found: ID does not exist" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.809757 4932 scope.go:117] "RemoveContainer" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" Feb 18 20:07:16 crc kubenswrapper[4932]: E0218 20:07:16.810122 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da\": container with ID starting with 2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da not found: ID does not exist" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.810154 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da"} err="failed to get container status \"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da\": rpc error: code = NotFound desc = could not find container \"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da\": container with ID starting with 2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da not found: ID does not exist" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.098770 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.184294 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"1f19857d-f085-411f-a08f-412d1173ed1c\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.184370 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"1f19857d-f085-411f-a08f-412d1173ed1c\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.184416 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"1f19857d-f085-411f-a08f-412d1173ed1c\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.190550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw" (OuterVolumeSpecName: 
"kube-api-access-nhhhw") pod "1f19857d-f085-411f-a08f-412d1173ed1c" (UID: "1f19857d-f085-411f-a08f-412d1173ed1c"). InnerVolumeSpecName "kube-api-access-nhhhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.192082 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9385117b-aef4-4fc9-9633-c237337beea2" path="/var/lib/kubelet/pods/9385117b-aef4-4fc9-9633-c237337beea2/volumes" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.217232 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1f19857d-f085-411f-a08f-412d1173ed1c" (UID: "1f19857d-f085-411f-a08f-412d1173ed1c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.228344 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f19857d-f085-411f-a08f-412d1173ed1c" (UID: "1f19857d-f085-411f-a08f-412d1173ed1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.287300 4932 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.287696 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.287710 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.688609 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerDied","Data":"b76d6ccafd2c0451bf7aa7ed91aceca0b2f73e23637c2896e2b6163faeb9266a"} Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.688655 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76d6ccafd2c0451bf7aa7ed91aceca0b2f73e23637c2896e2b6163faeb9266a" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.688717 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.809224 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck"] Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.819975 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-utilities" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820008 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-utilities" Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.820071 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f19857d-f085-411f-a08f-412d1173ed1c" containerName="ssh-known-hosts-edpm-deployment" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820081 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f19857d-f085-411f-a08f-412d1173ed1c" containerName="ssh-known-hosts-edpm-deployment" Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.820135 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-content" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820144 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-content" Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.820168 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820203 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820610 4932 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1f19857d-f085-411f-a08f-412d1173ed1c" containerName="ssh-known-hosts-edpm-deployment" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820643 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.823731 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck"] Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.823902 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.827121 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.827414 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.827693 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.828031 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.900262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.900438 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.900472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.002580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.002653 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.002704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.008748 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.015692 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.020148 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.144024 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.671142 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck"] Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.703127 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerStarted","Data":"ed88fc315bbf365fe36e0bdf4960a060dada12ee3c9b91e277fb93921e03f794"} Feb 18 20:07:19 crc kubenswrapper[4932]: I0218 20:07:19.715139 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerStarted","Data":"c1a20ccff30702f4bbf7da383ded31b7d96054a5d70fce66b5e248ac129367ad"} Feb 18 20:07:19 crc kubenswrapper[4932]: I0218 20:07:19.735951 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" podStartSLOduration=2.336633445 podStartE2EDuration="2.735930618s" podCreationTimestamp="2026-02-18 20:07:17 +0000 UTC" firstStartedPulling="2026-02-18 20:07:18.67383061 +0000 UTC m=+2002.255785505" lastFinishedPulling="2026-02-18 20:07:19.073127833 +0000 UTC m=+2002.655082678" observedRunningTime="2026-02-18 20:07:19.732533974 +0000 UTC m=+2003.314488819" watchObservedRunningTime="2026-02-18 20:07:19.735930618 +0000 UTC m=+2003.317885463" Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 20:07:27.606971 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 
20:07:27.608108 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 20:07:27.805106 4932 generic.go:334] "Generic (PLEG): container finished" podID="86713106-5952-4409-b655-9f87008c2050" containerID="c1a20ccff30702f4bbf7da383ded31b7d96054a5d70fce66b5e248ac129367ad" exitCode=0 Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 20:07:27.805158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerDied","Data":"c1a20ccff30702f4bbf7da383ded31b7d96054a5d70fce66b5e248ac129367ad"} Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.312251 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.440034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod \"86713106-5952-4409-b655-9f87008c2050\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.440767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"86713106-5952-4409-b655-9f87008c2050\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.441067 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"86713106-5952-4409-b655-9f87008c2050\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.450373 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd" (OuterVolumeSpecName: "kube-api-access-vtkkd") pod "86713106-5952-4409-b655-9f87008c2050" (UID: "86713106-5952-4409-b655-9f87008c2050"). InnerVolumeSpecName "kube-api-access-vtkkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.495048 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86713106-5952-4409-b655-9f87008c2050" (UID: "86713106-5952-4409-b655-9f87008c2050"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.497717 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory" (OuterVolumeSpecName: "inventory") pod "86713106-5952-4409-b655-9f87008c2050" (UID: "86713106-5952-4409-b655-9f87008c2050"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.544678 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.544710 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.544723 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.831028 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" 
event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerDied","Data":"ed88fc315bbf365fe36e0bdf4960a060dada12ee3c9b91e277fb93921e03f794"} Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.831088 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed88fc315bbf365fe36e0bdf4960a060dada12ee3c9b91e277fb93921e03f794" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.831166 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.011402 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn"] Feb 18 20:07:30 crc kubenswrapper[4932]: E0218 20:07:30.012345 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86713106-5952-4409-b655-9f87008c2050" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.012361 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="86713106-5952-4409-b655-9f87008c2050" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.012597 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="86713106-5952-4409-b655-9f87008c2050" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.013572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.016380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.016516 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.018983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.022207 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn"] Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.049100 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.156982 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.157048 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.157242 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.258851 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.259004 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.259040 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.263083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.264794 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.277589 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.372679 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.895457 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.901324 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn"] Feb 18 20:07:31 crc kubenswrapper[4932]: I0218 20:07:31.861616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerStarted","Data":"4600e294e7206f35dd2b6e87432a29eeca386dd2a6f12b3cded6ac249c1945c9"} Feb 18 20:07:31 crc kubenswrapper[4932]: I0218 20:07:31.862041 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerStarted","Data":"d6f8dba346309a41de3f61f5fa8513166c779bc22740cd9e75ae130e7af4b053"} Feb 18 20:07:31 crc kubenswrapper[4932]: I0218 20:07:31.882585 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" podStartSLOduration=2.4061503269999998 podStartE2EDuration="2.882567268s" podCreationTimestamp="2026-02-18 20:07:29 +0000 UTC" firstStartedPulling="2026-02-18 20:07:30.895086998 +0000 UTC m=+2014.477041853" lastFinishedPulling="2026-02-18 20:07:31.371503939 +0000 UTC m=+2014.953458794" observedRunningTime="2026-02-18 20:07:31.879297227 +0000 UTC m=+2015.461252132" watchObservedRunningTime="2026-02-18 20:07:31.882567268 +0000 UTC m=+2015.464522113" Feb 18 20:07:40 crc kubenswrapper[4932]: I0218 20:07:40.955223 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" 
containerID="4600e294e7206f35dd2b6e87432a29eeca386dd2a6f12b3cded6ac249c1945c9" exitCode=0 Feb 18 20:07:40 crc kubenswrapper[4932]: I0218 20:07:40.955350 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerDied","Data":"4600e294e7206f35dd2b6e87432a29eeca386dd2a6f12b3cded6ac249c1945c9"} Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.452361 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.541295 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.541479 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.541563 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.560623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4" (OuterVolumeSpecName: "kube-api-access-k2fr4") pod 
"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" (UID: "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96"). InnerVolumeSpecName "kube-api-access-k2fr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.572348 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory" (OuterVolumeSpecName: "inventory") pod "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" (UID: "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.576132 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" (UID: "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.644086 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.644131 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.644144 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.981334 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerDied","Data":"d6f8dba346309a41de3f61f5fa8513166c779bc22740cd9e75ae130e7af4b053"} Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.981607 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f8dba346309a41de3f61f5fa8513166c779bc22740cd9e75ae130e7af4b053" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.981428 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.116601 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq"] Feb 18 20:07:43 crc kubenswrapper[4932]: E0218 20:07:43.117015 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.117034 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.117258 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.117901 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.120005 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121382 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121746 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.123187 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.126558 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.127071 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.148285 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq"] Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257671 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257766 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257805 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257830 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257910 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258373 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258404 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258528 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258561 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258602 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.361234 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.361651 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.361880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56x8\" (UniqueName: 
\"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362499 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362857 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363342 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363699 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363947 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.364224 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.368359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.368429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.368587 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.369288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.370389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 
crc kubenswrapper[4932]: I0218 20:07:43.370874 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.371146 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.371975 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.372130 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.372836 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.374061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.375774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.376637 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.384013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.449472 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:44 crc kubenswrapper[4932]: I0218 20:07:44.078348 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq"] Feb 18 20:07:45 crc kubenswrapper[4932]: I0218 20:07:45.010225 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerStarted","Data":"1370d89c203eaedd57a26f8365c4d4c46629e61cc2bc1b7a5ded07e0ca770571"} Feb 18 20:07:45 crc kubenswrapper[4932]: I0218 20:07:45.010869 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerStarted","Data":"3f629ae06fe7522c8ae80af28f92eb58a1a8477afcd7e49d5bb7cbec8efa6b05"} Feb 18 20:07:45 crc kubenswrapper[4932]: I0218 20:07:45.036350 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" podStartSLOduration=1.6338945759999999 podStartE2EDuration="2.036333427s" podCreationTimestamp="2026-02-18 20:07:43 +0000 UTC" firstStartedPulling="2026-02-18 20:07:44.085208656 +0000 UTC m=+2027.667163511" lastFinishedPulling="2026-02-18 20:07:44.487647517 +0000 UTC m=+2028.069602362" observedRunningTime="2026-02-18 20:07:45.03360699 +0000 UTC m=+2028.615561835" watchObservedRunningTime="2026-02-18 20:07:45.036333427 +0000 UTC m=+2028.618288262" Feb 18 20:07:57 crc kubenswrapper[4932]: I0218 20:07:57.606431 4932 
patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:07:57 crc kubenswrapper[4932]: I0218 20:07:57.607039 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:08:21 crc kubenswrapper[4932]: I0218 20:08:21.339615 4932 generic.go:334] "Generic (PLEG): container finished" podID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerID="1370d89c203eaedd57a26f8365c4d4c46629e61cc2bc1b7a5ded07e0ca770571" exitCode=0 Feb 18 20:08:21 crc kubenswrapper[4932]: I0218 20:08:21.339674 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerDied","Data":"1370d89c203eaedd57a26f8365c4d4c46629e61cc2bc1b7a5ded07e0ca770571"} Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.814421 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939333 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939377 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939477 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: 
\"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939501 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939584 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939603 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939643 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939751 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.945110 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.945472 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.946299 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.947010 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.948074 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.948827 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.949060 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.949738 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.950868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.951832 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.952085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.958945 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8" (OuterVolumeSpecName: "kube-api-access-k56x8") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "kube-api-access-k56x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.979679 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.981635 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory" (OuterVolumeSpecName: "inventory") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042682 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042735 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042754 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042773 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042788 4932 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042800 4932 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042815 4932 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042829 4932 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042840 4932 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042854 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042868 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc 
kubenswrapper[4932]: I0218 20:08:23.042912 4932 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042926 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042937 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.360284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerDied","Data":"3f629ae06fe7522c8ae80af28f92eb58a1a8477afcd7e49d5bb7cbec8efa6b05"} Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.360327 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f629ae06fe7522c8ae80af28f92eb58a1a8477afcd7e49d5bb7cbec8efa6b05" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.360382 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.488920 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962"] Feb 18 20:08:23 crc kubenswrapper[4932]: E0218 20:08:23.489344 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.489361 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.489553 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.490223 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.494798 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.495033 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.495033 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.495222 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.496477 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.519552 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962"] Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560466 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: 
\"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560503 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662627 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662668 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662697 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662734 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662754 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.664297 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: 
\"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.667560 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.670334 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.681653 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.682539 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.814410 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:24 crc kubenswrapper[4932]: I0218 20:08:24.352006 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962"] Feb 18 20:08:24 crc kubenswrapper[4932]: I0218 20:08:24.371745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerStarted","Data":"1caca805311938cfc09992bdd1861fae6f71210dd49992178060515fb60b5a42"} Feb 18 20:08:25 crc kubenswrapper[4932]: I0218 20:08:25.384808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerStarted","Data":"7bfce2eae3e52d734bba86da5ba5caa23d72727cfec425013bf472191821e900"} Feb 18 20:08:25 crc kubenswrapper[4932]: I0218 20:08:25.414835 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" podStartSLOduration=1.922954912 podStartE2EDuration="2.414814735s" podCreationTimestamp="2026-02-18 20:08:23 +0000 UTC" firstStartedPulling="2026-02-18 20:08:24.354862842 +0000 UTC m=+2067.936817687" lastFinishedPulling="2026-02-18 20:08:24.846722665 +0000 UTC m=+2068.428677510" observedRunningTime="2026-02-18 20:08:25.407273578 +0000 UTC m=+2068.989228433" watchObservedRunningTime="2026-02-18 20:08:25.414814735 +0000 UTC m=+2068.996769580" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.606318 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.606876 4932 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.606923 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.607649 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.607704 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4" gracePeriod=600 Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 20:08:28.413988 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4" exitCode=0 Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 20:08:28.414079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4"} Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 
20:08:28.414812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"} Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 20:08:28.414870 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.048421 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.052240 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.058540 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.093194 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.093555 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.093609 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195453 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195879 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.196042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.219131 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.402191 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.969056 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:44 crc kubenswrapper[4932]: W0218 20:08:44.975566 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc336435_b073_4c36_91f6_159485fd9213.slice/crio-b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e WatchSource:0}: Error finding container b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e: Status 404 returned error can't find the container with id b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e Feb 18 20:08:45 crc kubenswrapper[4932]: I0218 20:08:45.607668 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc336435-b073-4c36-91f6-159485fd9213" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" exitCode=0 Feb 18 20:08:45 crc kubenswrapper[4932]: I0218 20:08:45.607764 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" 
event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2"} Feb 18 20:08:45 crc kubenswrapper[4932]: I0218 20:08:45.608042 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerStarted","Data":"b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e"} Feb 18 20:08:47 crc kubenswrapper[4932]: I0218 20:08:47.630710 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc336435-b073-4c36-91f6-159485fd9213" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" exitCode=0 Feb 18 20:08:47 crc kubenswrapper[4932]: I0218 20:08:47.630783 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965"} Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.034451 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.037471 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.048056 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.090413 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.090763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.090984 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.192971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.193037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.193254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.193945 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.194133 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.224567 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.363268 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.659490 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerStarted","Data":"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6"} Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.689693 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4rc9w" podStartSLOduration=2.203465801 podStartE2EDuration="4.689674054s" podCreationTimestamp="2026-02-18 20:08:44 +0000 UTC" firstStartedPulling="2026-02-18 20:08:45.609344066 +0000 UTC m=+2089.191298911" lastFinishedPulling="2026-02-18 20:08:48.095552319 +0000 UTC m=+2091.677507164" observedRunningTime="2026-02-18 20:08:48.67821005 +0000 UTC m=+2092.260164915" watchObservedRunningTime="2026-02-18 20:08:48.689674054 +0000 UTC m=+2092.271628899" Feb 18 20:08:48 crc kubenswrapper[4932]: W0218 20:08:48.877531 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30245549_b2f1_43f7_b45f_14f4ceb99f9f.slice/crio-354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044 WatchSource:0}: Error finding container 354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044: Status 404 returned error can't find the container with id 354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044 Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.879910 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:08:49 crc kubenswrapper[4932]: I0218 20:08:49.668934 4932 generic.go:334] "Generic (PLEG): container finished" podID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" exitCode=0 
Feb 18 20:08:49 crc kubenswrapper[4932]: I0218 20:08:49.670466 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27"} Feb 18 20:08:49 crc kubenswrapper[4932]: I0218 20:08:49.670488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerStarted","Data":"354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044"} Feb 18 20:08:50 crc kubenswrapper[4932]: I0218 20:08:50.678469 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerStarted","Data":"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59"} Feb 18 20:08:53 crc kubenswrapper[4932]: I0218 20:08:53.706262 4932 generic.go:334] "Generic (PLEG): container finished" podID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" exitCode=0 Feb 18 20:08:53 crc kubenswrapper[4932]: I0218 20:08:53.706326 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59"} Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.402939 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.402991 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.446822 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.764124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:55 crc kubenswrapper[4932]: I0218 20:08:55.725888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerStarted","Data":"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970"} Feb 18 20:08:55 crc kubenswrapper[4932]: I0218 20:08:55.750725 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vrps" podStartSLOduration=2.22888201 podStartE2EDuration="7.750703986s" podCreationTimestamp="2026-02-18 20:08:48 +0000 UTC" firstStartedPulling="2026-02-18 20:08:49.671488995 +0000 UTC m=+2093.253443840" lastFinishedPulling="2026-02-18 20:08:55.193310971 +0000 UTC m=+2098.775265816" observedRunningTime="2026-02-18 20:08:55.744580105 +0000 UTC m=+2099.326534950" watchObservedRunningTime="2026-02-18 20:08:55.750703986 +0000 UTC m=+2099.332658831" Feb 18 20:08:56 crc kubenswrapper[4932]: I0218 20:08:56.832556 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:56 crc kubenswrapper[4932]: I0218 20:08:56.833130 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4rc9w" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" containerID="cri-o://6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" gracePeriod=2 Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.343017 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.392365 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"bc336435-b073-4c36-91f6-159485fd9213\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.392474 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"bc336435-b073-4c36-91f6-159485fd9213\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.392675 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"bc336435-b073-4c36-91f6-159485fd9213\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.393355 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities" (OuterVolumeSpecName: "utilities") pod "bc336435-b073-4c36-91f6-159485fd9213" (UID: "bc336435-b073-4c36-91f6-159485fd9213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.398475 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2" (OuterVolumeSpecName: "kube-api-access-86lx2") pod "bc336435-b073-4c36-91f6-159485fd9213" (UID: "bc336435-b073-4c36-91f6-159485fd9213"). InnerVolumeSpecName "kube-api-access-86lx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.451876 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc336435-b073-4c36-91f6-159485fd9213" (UID: "bc336435-b073-4c36-91f6-159485fd9213"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.494843 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.495196 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.495212 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752230 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc336435-b073-4c36-91f6-159485fd9213" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" exitCode=0 Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752292 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6"} Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752459 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e"} Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752504 4932 scope.go:117] "RemoveContainer" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.782813 4932 scope.go:117] "RemoveContainer" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.795110 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.804160 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.822298 4932 scope.go:117] "RemoveContainer" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.872203 4932 scope.go:117] "RemoveContainer" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" Feb 18 20:08:57 crc kubenswrapper[4932]: E0218 20:08:57.872743 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6\": container with ID starting with 6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6 not found: ID does not exist" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.872774 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6"} err="failed to get container status \"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6\": rpc error: code = NotFound desc = could not find container \"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6\": container with ID starting with 6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6 not found: ID does not exist" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.872797 4932 scope.go:117] "RemoveContainer" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" Feb 18 20:08:57 crc kubenswrapper[4932]: E0218 20:08:57.873126 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965\": container with ID starting with 712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965 not found: ID does not exist" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.873150 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965"} err="failed to get container status \"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965\": rpc error: code = NotFound desc = could not find container \"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965\": container with ID 
starting with 712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965 not found: ID does not exist" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.873164 4932 scope.go:117] "RemoveContainer" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" Feb 18 20:08:57 crc kubenswrapper[4932]: E0218 20:08:57.873447 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2\": container with ID starting with e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2 not found: ID does not exist" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.873478 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2"} err="failed to get container status \"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2\": rpc error: code = NotFound desc = could not find container \"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2\": container with ID starting with e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2 not found: ID does not exist" Feb 18 20:08:58 crc kubenswrapper[4932]: I0218 20:08:58.363591 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:58 crc kubenswrapper[4932]: I0218 20:08:58.363639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:59 crc kubenswrapper[4932]: I0218 20:08:59.193070 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc336435-b073-4c36-91f6-159485fd9213" path="/var/lib/kubelet/pods/bc336435-b073-4c36-91f6-159485fd9213/volumes" Feb 18 20:08:59 crc 
kubenswrapper[4932]: I0218 20:08:59.426984 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vrps" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" probeResult="failure" output=< Feb 18 20:08:59 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:08:59 crc kubenswrapper[4932]: > Feb 18 20:09:08 crc kubenswrapper[4932]: I0218 20:09:08.417908 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:08 crc kubenswrapper[4932]: I0218 20:09:08.484544 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:08 crc kubenswrapper[4932]: I0218 20:09:08.661587 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:09:09 crc kubenswrapper[4932]: I0218 20:09:09.870187 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vrps" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" containerID="cri-o://4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" gracePeriod=2 Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.388141 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.460710 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.460909 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.461014 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.462311 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities" (OuterVolumeSpecName: "utilities") pod "30245549-b2f1-43f7-b45f-14f4ceb99f9f" (UID: "30245549-b2f1-43f7-b45f-14f4ceb99f9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.471604 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg" (OuterVolumeSpecName: "kube-api-access-sxzdg") pod "30245549-b2f1-43f7-b45f-14f4ceb99f9f" (UID: "30245549-b2f1-43f7-b45f-14f4ceb99f9f"). InnerVolumeSpecName "kube-api-access-sxzdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.564329 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.564723 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.608910 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30245549-b2f1-43f7-b45f-14f4ceb99f9f" (UID: "30245549-b2f1-43f7-b45f-14f4ceb99f9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.665679 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879351 4932 generic.go:334] "Generic (PLEG): container finished" podID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" exitCode=0 Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970"} Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879420 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044"} Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879434 4932 scope.go:117] "RemoveContainer" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879550 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.922551 4932 scope.go:117] "RemoveContainer" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.925782 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.936783 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.944701 4932 scope.go:117] "RemoveContainer" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.997837 4932 scope.go:117] "RemoveContainer" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" Feb 18 20:09:10 crc kubenswrapper[4932]: E0218 20:09:10.998423 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970\": container with ID starting with 4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970 not found: ID does not exist" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.998497 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970"} err="failed to get container status \"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970\": rpc error: code = NotFound desc = could not find container \"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970\": container with ID starting with 4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970 not found: ID does not exist" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.998539 4932 scope.go:117] "RemoveContainer" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" Feb 18 20:09:10 crc kubenswrapper[4932]: E0218 20:09:10.998968 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59\": container with ID starting with 8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59 not found: ID does not exist" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.999023 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59"} err="failed to get container status \"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59\": rpc error: code = NotFound desc = could not find container \"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59\": container with ID starting with 8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59 not found: ID does not exist" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.999061 4932 scope.go:117] "RemoveContainer" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" Feb 18 20:09:10 crc kubenswrapper[4932]: E0218 
20:09:10.999408 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27\": container with ID starting with 373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27 not found: ID does not exist" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.999450 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27"} err="failed to get container status \"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27\": rpc error: code = NotFound desc = could not find container \"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27\": container with ID starting with 373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27 not found: ID does not exist" Feb 18 20:09:11 crc kubenswrapper[4932]: I0218 20:09:11.190221 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" path="/var/lib/kubelet/pods/30245549-b2f1-43f7-b45f-14f4ceb99f9f/volumes" Feb 18 20:09:29 crc kubenswrapper[4932]: I0218 20:09:29.049668 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerID="7bfce2eae3e52d734bba86da5ba5caa23d72727cfec425013bf472191821e900" exitCode=0 Feb 18 20:09:29 crc kubenswrapper[4932]: I0218 20:09:29.049785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerDied","Data":"7bfce2eae3e52d734bba86da5ba5caa23d72727cfec425013bf472191821e900"} Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.532778 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.691920 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692062 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692261 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692294 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.697285 4932 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.698441 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667" (OuterVolumeSpecName: "kube-api-access-8p667") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "kube-api-access-8p667". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.722759 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.725863 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory" (OuterVolumeSpecName: "inventory") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.726309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795282 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795339 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795359 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795377 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795395 4932 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.076032 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerDied","Data":"1caca805311938cfc09992bdd1861fae6f71210dd49992178060515fb60b5a42"} Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.076071 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1caca805311938cfc09992bdd1861fae6f71210dd49992178060515fb60b5a42" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.076092 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.175945 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj"] Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176728 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176751 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176770 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176778 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176790 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176797 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176820 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176827 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176846 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176854 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176874 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176882 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176896 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176903 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177116 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177135 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177150 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177900 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.180022 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.180083 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.180208 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.181943 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.181950 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.182102 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.194937 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj"] Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308240 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308291 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308645 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308994 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410457 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410568 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410656 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410703 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.414731 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: 
\"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.415143 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.416519 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.416972 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.425515 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.429025 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.503777 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:32 crc kubenswrapper[4932]: I0218 20:09:32.149528 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj"] Feb 18 20:09:33 crc kubenswrapper[4932]: I0218 20:09:33.100914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerStarted","Data":"755fcb39e1b91cd8b987a4bfa68473f8016f747f8c5758a79f4905e1b2df2117"} Feb 18 20:09:33 crc kubenswrapper[4932]: I0218 20:09:33.101185 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerStarted","Data":"286150e5518249c2629c065fb8f36ccd7d32311c5a5490e9bc7e7cbcc0367a3d"} Feb 18 20:09:33 crc kubenswrapper[4932]: I0218 20:09:33.124097 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" podStartSLOduration=1.547606913 podStartE2EDuration="2.124075829s" podCreationTimestamp="2026-02-18 20:09:31 +0000 UTC" firstStartedPulling="2026-02-18 
20:09:32.167925114 +0000 UTC m=+2135.749879959" lastFinishedPulling="2026-02-18 20:09:32.74439403 +0000 UTC m=+2136.326348875" observedRunningTime="2026-02-18 20:09:33.123588327 +0000 UTC m=+2136.705543172" watchObservedRunningTime="2026-02-18 20:09:33.124075829 +0000 UTC m=+2136.706030674" Feb 18 20:09:56 crc kubenswrapper[4932]: I0218 20:09:56.968001 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:09:56 crc kubenswrapper[4932]: I0218 20:09:56.970591 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:56 crc kubenswrapper[4932]: I0218 20:09:56.983807 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.170130 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.170356 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.170489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"redhat-marketplace-hwcb8\" (UID: 
\"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.272316 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.272427 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.272491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.273026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.273070 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " 
pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.304203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.590717 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:58 crc kubenswrapper[4932]: I0218 20:09:58.094631 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:09:58 crc kubenswrapper[4932]: I0218 20:09:58.357060 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerStarted","Data":"5e3ce083f25ec7c111ba25d38562a008bf0ce7b8f840b83df011400e166aad5a"} Feb 18 20:09:59 crc kubenswrapper[4932]: I0218 20:09:59.369959 4932 generic.go:334] "Generic (PLEG): container finished" podID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerID="ceae045b7c2c872fa7d9d4bedd72b0329dffb4ff15e8c477344b3b13adecbd9f" exitCode=0 Feb 18 20:09:59 crc kubenswrapper[4932]: I0218 20:09:59.370015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"ceae045b7c2c872fa7d9d4bedd72b0329dffb4ff15e8c477344b3b13adecbd9f"} Feb 18 20:10:00 crc kubenswrapper[4932]: I0218 20:10:00.384582 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" 
event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerStarted","Data":"dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345"} Feb 18 20:10:01 crc kubenswrapper[4932]: I0218 20:10:01.398882 4932 generic.go:334] "Generic (PLEG): container finished" podID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerID="dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345" exitCode=0 Feb 18 20:10:01 crc kubenswrapper[4932]: I0218 20:10:01.398925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345"} Feb 18 20:10:02 crc kubenswrapper[4932]: I0218 20:10:02.415538 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerStarted","Data":"d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a"} Feb 18 20:10:02 crc kubenswrapper[4932]: I0218 20:10:02.444106 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwcb8" podStartSLOduration=3.966374052 podStartE2EDuration="6.444080615s" podCreationTimestamp="2026-02-18 20:09:56 +0000 UTC" firstStartedPulling="2026-02-18 20:09:59.374132765 +0000 UTC m=+2162.956087610" lastFinishedPulling="2026-02-18 20:10:01.851839328 +0000 UTC m=+2165.433794173" observedRunningTime="2026-02-18 20:10:02.432874577 +0000 UTC m=+2166.014829442" watchObservedRunningTime="2026-02-18 20:10:02.444080615 +0000 UTC m=+2166.026035470" Feb 18 20:10:07 crc kubenswrapper[4932]: I0218 20:10:07.591475 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:07 crc kubenswrapper[4932]: I0218 20:10:07.592651 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:07 crc kubenswrapper[4932]: I0218 20:10:07.658989 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:08 crc kubenswrapper[4932]: I0218 20:10:08.525057 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:08 crc kubenswrapper[4932]: I0218 20:10:08.575849 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:10:10 crc kubenswrapper[4932]: I0218 20:10:10.490566 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hwcb8" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" containerID="cri-o://d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a" gracePeriod=2 Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.504870 4932 generic.go:334] "Generic (PLEG): container finished" podID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerID="d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a" exitCode=0 Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.504970 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a"} Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.505392 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"5e3ce083f25ec7c111ba25d38562a008bf0ce7b8f840b83df011400e166aad5a"} Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.505416 4932 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="5e3ce083f25ec7c111ba25d38562a008bf0ce7b8f840b83df011400e166aad5a" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.508208 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.698906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"d46440a6-e998-48df-a6ee-83e196dc6f97\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.699040 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"d46440a6-e998-48df-a6ee-83e196dc6f97\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.699494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"d46440a6-e998-48df-a6ee-83e196dc6f97\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.700054 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities" (OuterVolumeSpecName: "utilities") pod "d46440a6-e998-48df-a6ee-83e196dc6f97" (UID: "d46440a6-e998-48df-a6ee-83e196dc6f97"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.701955 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.705521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j" (OuterVolumeSpecName: "kube-api-access-slw5j") pod "d46440a6-e998-48df-a6ee-83e196dc6f97" (UID: "d46440a6-e998-48df-a6ee-83e196dc6f97"). InnerVolumeSpecName "kube-api-access-slw5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.726437 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d46440a6-e998-48df-a6ee-83e196dc6f97" (UID: "d46440a6-e998-48df-a6ee-83e196dc6f97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.803743 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.803796 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:12 crc kubenswrapper[4932]: I0218 20:10:12.527126 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:12 crc kubenswrapper[4932]: I0218 20:10:12.574429 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:10:12 crc kubenswrapper[4932]: I0218 20:10:12.583916 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:10:13 crc kubenswrapper[4932]: I0218 20:10:13.200954 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" path="/var/lib/kubelet/pods/d46440a6-e998-48df-a6ee-83e196dc6f97/volumes" Feb 18 20:10:20 crc kubenswrapper[4932]: I0218 20:10:20.617394 4932 generic.go:334] "Generic (PLEG): container finished" podID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerID="755fcb39e1b91cd8b987a4bfa68473f8016f747f8c5758a79f4905e1b2df2117" exitCode=0 Feb 18 20:10:20 crc kubenswrapper[4932]: I0218 20:10:20.617532 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerDied","Data":"755fcb39e1b91cd8b987a4bfa68473f8016f747f8c5758a79f4905e1b2df2117"} Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.172708 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.323425 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.323715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.323864 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.324097 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.324265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc 
kubenswrapper[4932]: I0218 20:10:22.324454 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.329770 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8" (OuterVolumeSpecName: "kube-api-access-d46t8") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "kube-api-access-d46t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.336447 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.353644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory" (OuterVolumeSpecName: "inventory") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.369140 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.373082 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.378634 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428675 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428753 4932 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428789 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428817 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428841 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428867 4932 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.640805 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.640791 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerDied","Data":"286150e5518249c2629c065fb8f36ccd7d32311c5a5490e9bc7e7cbcc0367a3d"} Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.640886 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="286150e5518249c2629c065fb8f36ccd7d32311c5a5490e9bc7e7cbcc0367a3d" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.912672 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj"] Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913143 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913168 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913205 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-utilities" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913213 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-utilities" Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913246 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-content" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913255 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-content" Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913268 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913276 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913487 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913526 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.914399 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.916615 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.916905 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.916915 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.919623 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.924099 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj"] Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.955921 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056641 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056807 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: 
\"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.057292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.159701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160110 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160139 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160315 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.170757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.171029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.171731 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.174579 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.180591 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.264762 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: W0218 20:10:23.784415 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc47c3fb_c74e_42df_ba84_e4c58dbbe796.slice/crio-47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d WatchSource:0}: Error finding container 47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d: Status 404 returned error can't find the container with id 47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.786477 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj"] Feb 18 20:10:24 crc kubenswrapper[4932]: I0218 20:10:24.661166 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerStarted","Data":"bf95cd775c67b15f2eb4cba258d588c2723c45e736242db964a5a9abd7443fcb"} Feb 18 20:10:24 crc kubenswrapper[4932]: I0218 20:10:24.661540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerStarted","Data":"47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d"} Feb 18 20:10:24 crc kubenswrapper[4932]: I0218 20:10:24.681078 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" podStartSLOduration=2.180694446 podStartE2EDuration="2.681059177s" podCreationTimestamp="2026-02-18 20:10:22 +0000 UTC" firstStartedPulling="2026-02-18 20:10:23.787217645 +0000 UTC m=+2187.369172490" lastFinishedPulling="2026-02-18 20:10:24.287582376 +0000 UTC m=+2187.869537221" 
observedRunningTime="2026-02-18 20:10:24.677278143 +0000 UTC m=+2188.259232988" watchObservedRunningTime="2026-02-18 20:10:24.681059177 +0000 UTC m=+2188.263014022" Feb 18 20:10:27 crc kubenswrapper[4932]: I0218 20:10:27.606130 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:10:27 crc kubenswrapper[4932]: I0218 20:10:27.606764 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:10:57 crc kubenswrapper[4932]: I0218 20:10:57.606310 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:10:57 crc kubenswrapper[4932]: I0218 20:10:57.607016 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.606102 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.606602 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.606648 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.607186 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.607294 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" gracePeriod=600 Feb 18 20:11:27 crc kubenswrapper[4932]: E0218 20:11:27.731868 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.347565 
4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" exitCode=0 Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.347631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"} Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.347687 4932 scope.go:117] "RemoveContainer" containerID="93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4" Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.348828 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:11:28 crc kubenswrapper[4932]: E0218 20:11:28.349427 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:11:43 crc kubenswrapper[4932]: I0218 20:11:43.180791 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:11:43 crc kubenswrapper[4932]: E0218 20:11:43.182252 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:11:57 crc kubenswrapper[4932]: I0218 20:11:57.180507 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:11:57 crc kubenswrapper[4932]: E0218 20:11:57.181642 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:12 crc kubenswrapper[4932]: I0218 20:12:12.179782 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:12 crc kubenswrapper[4932]: E0218 20:12:12.180763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:27 crc kubenswrapper[4932]: I0218 20:12:27.188695 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:27 crc kubenswrapper[4932]: E0218 20:12:27.189443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:38 crc kubenswrapper[4932]: I0218 20:12:38.179547 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:38 crc kubenswrapper[4932]: E0218 20:12:38.180441 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:51 crc kubenswrapper[4932]: I0218 20:12:51.180515 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:51 crc kubenswrapper[4932]: E0218 20:12:51.181255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:05 crc kubenswrapper[4932]: I0218 20:13:05.179844 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:05 crc kubenswrapper[4932]: E0218 20:13:05.180961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:18 crc kubenswrapper[4932]: I0218 20:13:18.180495 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:18 crc kubenswrapper[4932]: E0218 20:13:18.181843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:29 crc kubenswrapper[4932]: I0218 20:13:29.179705 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:29 crc kubenswrapper[4932]: E0218 20:13:29.180572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:44 crc kubenswrapper[4932]: I0218 20:13:44.179815 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:44 crc kubenswrapper[4932]: E0218 20:13:44.180829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:58 crc kubenswrapper[4932]: I0218 20:13:58.180977 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:58 crc kubenswrapper[4932]: E0218 20:13:58.183368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:09 crc kubenswrapper[4932]: I0218 20:14:09.179973 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:14:09 crc kubenswrapper[4932]: E0218 20:14:09.180704 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:18 crc kubenswrapper[4932]: I0218 20:14:18.136963 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerID="bf95cd775c67b15f2eb4cba258d588c2723c45e736242db964a5a9abd7443fcb" exitCode=0 Feb 18 20:14:18 crc kubenswrapper[4932]: I0218 20:14:18.137124 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerDied","Data":"bf95cd775c67b15f2eb4cba258d588c2723c45e736242db964a5a9abd7443fcb"} Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.574444 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675477 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675555 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675671 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675744 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675794 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.683560 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.683617 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh" (OuterVolumeSpecName: "kube-api-access-gf6nh") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "kube-api-access-gf6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.707966 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.720141 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.733416 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory" (OuterVolumeSpecName: "inventory") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778567 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778655 4932 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778680 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778700 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") 
on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778720 4932 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.162746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerDied","Data":"47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d"} Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.162813 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.163413 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.296619 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"] Feb 18 20:14:20 crc kubenswrapper[4932]: E0218 20:14:20.297265 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.297360 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.297607 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.298396 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.301227 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.301313 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.303220 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.305092 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.305111 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.305591 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.306256 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.319274 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"] Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.391712 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" 
Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392228 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392315 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392451 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392681 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392762 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392839 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.494408 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" 
(UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495463 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495687 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495866 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.496063 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.496271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.496576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.498011 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.500503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.500575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.501610 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502186 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502463 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.522779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.617578 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"
Feb 18 20:14:21 crc kubenswrapper[4932]: I0218 20:14:21.199628 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"]
Feb 18 20:14:21 crc kubenswrapper[4932]: I0218 20:14:21.203736 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.178802 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:14:22 crc kubenswrapper[4932]: E0218 20:14:22.179290 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.182093 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerStarted","Data":"92107d2c8f069f6435154109fac7c79f113f6ea4864ec704bdf5eec52f4f21b0"}
Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.182125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerStarted","Data":"95205dea94bf7eafa594ecbadc936061cd8dadbc7f5242ac8efb85f35357d2f5"}
Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.206142 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" podStartSLOduration=1.7491718939999998 podStartE2EDuration="2.206117108s" podCreationTimestamp="2026-02-18 20:14:20 +0000 UTC" firstStartedPulling="2026-02-18 20:14:21.203534221 +0000 UTC m=+2424.785489066" lastFinishedPulling="2026-02-18 20:14:21.660479425 +0000 UTC m=+2425.242434280" observedRunningTime="2026-02-18 20:14:22.200140339 +0000 UTC m=+2425.782095194" watchObservedRunningTime="2026-02-18 20:14:22.206117108 +0000 UTC m=+2425.788071973"
Feb 18 20:14:34 crc kubenswrapper[4932]: I0218 20:14:34.179825 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:14:34 crc kubenswrapper[4932]: E0218 20:14:34.180586 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:14:45 crc kubenswrapper[4932]: I0218 20:14:45.179812 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:14:45 crc kubenswrapper[4932]: E0218 20:14:45.181307 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:14:57 crc kubenswrapper[4932]: I0218 20:14:57.197694 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:14:57 crc kubenswrapper[4932]: E0218 20:14:57.198726 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.144572 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"]
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.147183 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.151739 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.152589 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.154698 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"]
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.227357 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.227529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.227580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.330331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.330548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.330686 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.331361 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.336093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.346488 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.481421 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.940412 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"]
Feb 18 20:15:01 crc kubenswrapper[4932]: I0218 20:15:01.602782 4932 generic.go:334] "Generic (PLEG): container finished" podID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerID="b79875069ecc1431ced41ee0aadf13bbe89c7ee6b34078234cc6eb1c6d79dd0b" exitCode=0
Feb 18 20:15:01 crc kubenswrapper[4932]: I0218 20:15:01.602832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" event={"ID":"43cf3e74-b4e7-4f54-b21c-cf9018235782","Type":"ContainerDied","Data":"b79875069ecc1431ced41ee0aadf13bbe89c7ee6b34078234cc6eb1c6d79dd0b"}
Feb 18 20:15:01 crc kubenswrapper[4932]: I0218 20:15:01.602861 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" event={"ID":"43cf3e74-b4e7-4f54-b21c-cf9018235782","Type":"ContainerStarted","Data":"29e750d064faaab62028bef54cf34b20e09184983057ee8ad64092c7db2f70cf"}
Feb 18 20:15:02 crc kubenswrapper[4932]: I0218 20:15:02.979834 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.088156 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"43cf3e74-b4e7-4f54-b21c-cf9018235782\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") "
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.088235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"43cf3e74-b4e7-4f54-b21c-cf9018235782\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") "
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.088311 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"43cf3e74-b4e7-4f54-b21c-cf9018235782\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") "
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.089144 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume" (OuterVolumeSpecName: "config-volume") pod "43cf3e74-b4e7-4f54-b21c-cf9018235782" (UID: "43cf3e74-b4e7-4f54-b21c-cf9018235782"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.093864 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq" (OuterVolumeSpecName: "kube-api-access-szgvq") pod "43cf3e74-b4e7-4f54-b21c-cf9018235782" (UID: "43cf3e74-b4e7-4f54-b21c-cf9018235782"). InnerVolumeSpecName "kube-api-access-szgvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.100546 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43cf3e74-b4e7-4f54-b21c-cf9018235782" (UID: "43cf3e74-b4e7-4f54-b21c-cf9018235782"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.201207 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") on node \"crc\" DevicePath \"\""
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.201273 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.201287 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.624659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" event={"ID":"43cf3e74-b4e7-4f54-b21c-cf9018235782","Type":"ContainerDied","Data":"29e750d064faaab62028bef54cf34b20e09184983057ee8ad64092c7db2f70cf"}
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.624736 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e750d064faaab62028bef54cf34b20e09184983057ee8ad64092c7db2f70cf"
Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.624811 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"
Feb 18 20:15:04 crc kubenswrapper[4932]: I0218 20:15:04.055958 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"]
Feb 18 20:15:04 crc kubenswrapper[4932]: I0218 20:15:04.064697 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"]
Feb 18 20:15:05 crc kubenswrapper[4932]: I0218 20:15:05.192649 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" path="/var/lib/kubelet/pods/048e17bc-05bf-40e4-9f40-87d936fcf772/volumes"
Feb 18 20:15:05 crc kubenswrapper[4932]: I0218 20:15:05.879013 4932 scope.go:117] "RemoveContainer" containerID="67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d"
Feb 18 20:15:12 crc kubenswrapper[4932]: I0218 20:15:12.179478 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:15:12 crc kubenswrapper[4932]: E0218 20:15:12.180585 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:15:23 crc kubenswrapper[4932]: I0218 20:15:23.179520 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:15:23 crc kubenswrapper[4932]: E0218 20:15:23.180409 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:15:37 crc kubenswrapper[4932]: I0218 20:15:37.198569 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:15:37 crc kubenswrapper[4932]: E0218 20:15:37.199425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:15:52 crc kubenswrapper[4932]: I0218 20:15:52.179818 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:15:52 crc kubenswrapper[4932]: E0218 20:15:52.180564 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:16:05 crc kubenswrapper[4932]: I0218 20:16:05.180129 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:16:05 crc kubenswrapper[4932]: E0218 20:16:05.181533 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:16:05 crc kubenswrapper[4932]: I0218 20:16:05.957015 4932 scope.go:117] "RemoveContainer" containerID="dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345"
Feb 18 20:16:05 crc kubenswrapper[4932]: I0218 20:16:05.998851 4932 scope.go:117] "RemoveContainer" containerID="d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a"
Feb 18 20:16:06 crc kubenswrapper[4932]: I0218 20:16:06.063449 4932 scope.go:117] "RemoveContainer" containerID="ceae045b7c2c872fa7d9d4bedd72b0329dffb4ff15e8c477344b3b13adecbd9f"
Feb 18 20:16:19 crc kubenswrapper[4932]: I0218 20:16:19.179705 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:16:19 crc kubenswrapper[4932]: E0218 20:16:19.180906 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:16:34 crc kubenswrapper[4932]: I0218 20:16:34.182136 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"
Feb 18 20:16:34 crc kubenswrapper[4932]: I0218 20:16:34.689154 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec"}
Feb 18 20:16:51 crc kubenswrapper[4932]: I0218 20:16:51.878728 4932 generic.go:334] "Generic (PLEG): container finished" podID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerID="92107d2c8f069f6435154109fac7c79f113f6ea4864ec704bdf5eec52f4f21b0" exitCode=0
Feb 18 20:16:51 crc kubenswrapper[4932]: I0218 20:16:51.878849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerDied","Data":"92107d2c8f069f6435154109fac7c79f113f6ea4864ec704bdf5eec52f4f21b0"}
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.321794 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446408 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446479 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446525 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446601 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446629 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446773 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446873 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446973 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") "
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.452347 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.453851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz" (OuterVolumeSpecName: "kube-api-access-g9vzz") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "kube-api-access-g9vzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.478342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.479685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory" (OuterVolumeSpecName: "inventory") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.479894 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.481407 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.483508 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.487408 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.487486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549232 4932 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549268 4932 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549281 4932 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549292 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549306 4932 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549316 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549329 4932 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549340 4932 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549383 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.896871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerDied","Data":"95205dea94bf7eafa594ecbadc936061cd8dadbc7f5242ac8efb85f35357d2f5"}
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.896920 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95205dea94bf7eafa594ecbadc936061cd8dadbc7f5242ac8efb85f35357d2f5"
Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.896917 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.095691 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"]
Feb 18 20:16:54 crc kubenswrapper[4932]: E0218 20:16:54.096529 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096555 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:16:54 crc kubenswrapper[4932]: E0218 20:16:54.096575 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerName="collect-profiles"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096583 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerName="collect-profiles"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096821 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096847 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerName="collect-profiles"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.097735 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.100114 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.100821 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.101433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.101455 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.101681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.107811 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"]
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165523 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165799 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165935 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.166023 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.166261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268646 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268719 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268747 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268773 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268821 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268843 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"
Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.274508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName:
\"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.274530 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.274939 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.275095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.275322 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.277110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.286969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.415501 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.943079 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"] Feb 18 20:16:55 crc kubenswrapper[4932]: I0218 20:16:55.917638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerStarted","Data":"0ff3dd8901a960fc64101e349f14f625d265537feec7a85d08ecad6a7f58dcfd"} Feb 18 20:16:55 crc kubenswrapper[4932]: I0218 20:16:55.918078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerStarted","Data":"9fedb3dcc828bf71bbbf9cbf690773fa436b77d77990e905ec72e0916360e9dd"} Feb 18 20:16:55 crc kubenswrapper[4932]: I0218 20:16:55.940935 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" podStartSLOduration=1.486760805 podStartE2EDuration="1.94091338s" podCreationTimestamp="2026-02-18 20:16:54 +0000 UTC" firstStartedPulling="2026-02-18 20:16:54.957243852 +0000 UTC m=+2578.539198697" lastFinishedPulling="2026-02-18 20:16:55.411396427 +0000 UTC m=+2578.993351272" observedRunningTime="2026-02-18 20:16:55.937048025 +0000 UTC m=+2579.519002870" watchObservedRunningTime="2026-02-18 20:16:55.94091338 +0000 UTC m=+2579.522868225" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.661157 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.663817 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.673918 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.833321 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.833382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.833423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936093 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936248 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936536 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936855 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.957034 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:51 crc kubenswrapper[4932]: I0218 20:17:51.007408 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:51 crc kubenswrapper[4932]: I0218 20:17:51.530373 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:17:51 crc kubenswrapper[4932]: I0218 20:17:51.567519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerStarted","Data":"c491639733116403151eda258c62a2eee151de18e5318d854dd76fc4c4f42d9a"} Feb 18 20:17:52 crc kubenswrapper[4932]: I0218 20:17:52.581125 4932 generic.go:334] "Generic (PLEG): container finished" podID="bccb4e09-25d0-498e-92d0-dac8572db926" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" exitCode=0 Feb 18 20:17:52 crc kubenswrapper[4932]: I0218 20:17:52.581220 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db"} Feb 18 20:17:53 crc kubenswrapper[4932]: I0218 20:17:53.592705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerStarted","Data":"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b"} Feb 18 20:17:54 crc kubenswrapper[4932]: I0218 20:17:54.604492 4932 generic.go:334] "Generic (PLEG): container finished" podID="bccb4e09-25d0-498e-92d0-dac8572db926" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" exitCode=0 Feb 18 20:17:54 crc kubenswrapper[4932]: I0218 20:17:54.604558 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" 
event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b"} Feb 18 20:17:55 crc kubenswrapper[4932]: I0218 20:17:55.623491 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerStarted","Data":"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419"} Feb 18 20:17:55 crc kubenswrapper[4932]: I0218 20:17:55.684277 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wmn86" podStartSLOduration=3.255925645 podStartE2EDuration="5.684245054s" podCreationTimestamp="2026-02-18 20:17:50 +0000 UTC" firstStartedPulling="2026-02-18 20:17:52.583563062 +0000 UTC m=+2636.165517947" lastFinishedPulling="2026-02-18 20:17:55.011882461 +0000 UTC m=+2638.593837356" observedRunningTime="2026-02-18 20:17:55.673490198 +0000 UTC m=+2639.255445063" watchObservedRunningTime="2026-02-18 20:17:55.684245054 +0000 UTC m=+2639.266199939" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.008529 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.013037 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.089228 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.771781 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.841650 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:18:03 crc kubenswrapper[4932]: I0218 20:18:03.715130 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wmn86" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" containerID="cri-o://6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" gracePeriod=2 Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.219994 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.370101 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"bccb4e09-25d0-498e-92d0-dac8572db926\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.370592 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"bccb4e09-25d0-498e-92d0-dac8572db926\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.371527 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities" (OuterVolumeSpecName: "utilities") pod "bccb4e09-25d0-498e-92d0-dac8572db926" (UID: "bccb4e09-25d0-498e-92d0-dac8572db926"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.375449 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"bccb4e09-25d0-498e-92d0-dac8572db926\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.376127 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.385513 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr" (OuterVolumeSpecName: "kube-api-access-r5zfr") pod "bccb4e09-25d0-498e-92d0-dac8572db926" (UID: "bccb4e09-25d0-498e-92d0-dac8572db926"). InnerVolumeSpecName "kube-api-access-r5zfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.442999 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bccb4e09-25d0-498e-92d0-dac8572db926" (UID: "bccb4e09-25d0-498e-92d0-dac8572db926"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.477381 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.477412 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") on node \"crc\" DevicePath \"\"" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730446 4932 generic.go:334] "Generic (PLEG): container finished" podID="bccb4e09-25d0-498e-92d0-dac8572db926" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" exitCode=0 Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419"} Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"c491639733116403151eda258c62a2eee151de18e5318d854dd76fc4c4f42d9a"} Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730583 4932 scope.go:117] "RemoveContainer" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.731367 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.799604 4932 scope.go:117] "RemoveContainer" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.810566 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.824243 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.824790 4932 scope.go:117] "RemoveContainer" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.891233 4932 scope.go:117] "RemoveContainer" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" Feb 18 20:18:04 crc kubenswrapper[4932]: E0218 20:18:04.892626 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419\": container with ID starting with 6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419 not found: ID does not exist" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.892663 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419"} err="failed to get container status \"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419\": rpc error: code = NotFound desc = could not find container \"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419\": container with ID starting with 6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419 not 
found: ID does not exist" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.892685 4932 scope.go:117] "RemoveContainer" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" Feb 18 20:18:04 crc kubenswrapper[4932]: E0218 20:18:04.893438 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b\": container with ID starting with 9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b not found: ID does not exist" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.893465 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b"} err="failed to get container status \"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b\": rpc error: code = NotFound desc = could not find container \"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b\": container with ID starting with 9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b not found: ID does not exist" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.893479 4932 scope.go:117] "RemoveContainer" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" Feb 18 20:18:04 crc kubenswrapper[4932]: E0218 20:18:04.893781 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db\": container with ID starting with 67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db not found: ID does not exist" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.893803 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db"} err="failed to get container status \"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db\": rpc error: code = NotFound desc = could not find container \"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db\": container with ID starting with 67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db not found: ID does not exist" Feb 18 20:18:05 crc kubenswrapper[4932]: I0218 20:18:05.193881 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" path="/var/lib/kubelet/pods/bccb4e09-25d0-498e-92d0-dac8572db926/volumes" Feb 18 20:18:57 crc kubenswrapper[4932]: I0218 20:18:57.607311 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:18:57 crc kubenswrapper[4932]: I0218 20:18:57.608364 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:18:59 crc kubenswrapper[4932]: I0218 20:18:59.380263 4932 generic.go:334] "Generic (PLEG): container finished" podID="438e3417-67a9-417c-9e75-d0e207ab1812" containerID="0ff3dd8901a960fc64101e349f14f625d265537feec7a85d08ecad6a7f58dcfd" exitCode=0 Feb 18 20:18:59 crc kubenswrapper[4932]: I0218 20:18:59.380372 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" 
event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerDied","Data":"0ff3dd8901a960fc64101e349f14f625d265537feec7a85d08ecad6a7f58dcfd"} Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.892196 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981297 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981337 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981404 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981522 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981583 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.987620 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q" (OuterVolumeSpecName: "kube-api-access-nbg4q") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "kube-api-access-nbg4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.990486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.013315 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory" (OuterVolumeSpecName: "inventory") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.015072 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.024632 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.029593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.041334 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084662 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084719 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084739 4932 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084760 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084780 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 
20:19:01.084800 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084821 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.406474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerDied","Data":"9fedb3dcc828bf71bbbf9cbf690773fa436b77d77990e905ec72e0916360e9dd"} Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.406552 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fedb3dcc828bf71bbbf9cbf690773fa436b77d77990e905ec72e0916360e9dd" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.406657 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.606613 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.607289 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.834716 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835307 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438e3417-67a9-417c-9e75-d0e207ab1812" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835333 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="438e3417-67a9-417c-9e75-d0e207ab1812" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835356 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835365 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835410 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-utilities" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835420 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-utilities" Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835432 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-content" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835440 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-content" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835682 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="438e3417-67a9-417c-9e75-d0e207ab1812" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835702 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.837207 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.857649 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.880220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.880300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.881838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984339 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984373 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.985336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:28 crc kubenswrapper[4932]: I0218 20:19:28.004300 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:28 crc kubenswrapper[4932]: I0218 20:19:28.181451 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:28 crc kubenswrapper[4932]: I0218 20:19:28.774131 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.729063 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c726726-9ae9-4956-9999-09c956029615" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc" exitCode=0 Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.729132 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"} Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.729834 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerStarted","Data":"e093f1cca327bf041c93f56c487c61e59e1e403678b96164f3bbb1c6097b672a"} Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.731861 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:19:31 crc kubenswrapper[4932]: I0218 20:19:31.765420 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerStarted","Data":"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"} Feb 18 20:19:34 crc kubenswrapper[4932]: I0218 20:19:34.794168 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c726726-9ae9-4956-9999-09c956029615" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f" exitCode=0 Feb 18 20:19:34 crc kubenswrapper[4932]: I0218 20:19:34.794213 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"} Feb 18 20:19:35 crc kubenswrapper[4932]: I0218 20:19:35.808559 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerStarted","Data":"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"} Feb 18 20:19:35 crc kubenswrapper[4932]: I0218 20:19:35.835617 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4fhd" podStartSLOduration=3.340472521 podStartE2EDuration="8.835597975s" podCreationTimestamp="2026-02-18 20:19:27 +0000 UTC" firstStartedPulling="2026-02-18 20:19:29.731595861 +0000 UTC m=+2733.313550706" lastFinishedPulling="2026-02-18 20:19:35.226721315 +0000 UTC m=+2738.808676160" observedRunningTime="2026-02-18 20:19:35.829677619 +0000 UTC m=+2739.411632464" watchObservedRunningTime="2026-02-18 20:19:35.835597975 +0000 UTC m=+2739.417552820" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.091407 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.093914 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.095569 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.107445 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176126 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvp8\" (UniqueName: \"kubernetes.io/projected/80782f4b-1aed-46fc-9400-896d1a9d02f7-kube-api-access-6vvp8\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176151 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176411 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176520 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-scripts\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-sys\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176615 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-dev\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176734 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176813 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176855 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176900 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-run\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.177064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.177102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.206073 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.208145 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.210462 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.216813 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.244056 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.245973 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.247962 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.272181 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279380 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279484 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-run\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279541 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279584 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h488t\" (UniqueName: \"kubernetes.io/projected/454e91b9-5fe5-445a-ae9d-372899613515-kube-api-access-h488t\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279609 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-sys\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279628 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279654 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-lib-modules\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279686 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279725 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279759 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279843 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 
20:19:36.279874 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279942 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279968 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279991 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280016 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280216 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvp8\" (UniqueName: \"kubernetes.io/projected/80782f4b-1aed-46fc-9400-896d1a9d02f7-kube-api-access-6vvp8\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280277 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280306 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280416 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280447 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280457 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280502 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcfn6\" (UniqueName: \"kubernetes.io/projected/74b2e4dc-d3d9-4aa4-8255-6c174d925528-kube-api-access-fcfn6\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280598 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280621 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280660 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-scripts\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 
20:19:36.280693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-sys\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280719 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-dev\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-dev\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-sys\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280856 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-cinder\") pod 
\"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280860 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-dev\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280911 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280958 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280989 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281033 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281103 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281125 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281160 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281250 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-run\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281368 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281620 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-run\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281907 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: 
I0218 20:19:36.292593 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.293428 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-scripts\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.300266 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.300378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.304830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvp8\" (UniqueName: \"kubernetes.io/projected/80782f4b-1aed-46fc-9400-896d1a9d02f7-kube-api-access-6vvp8\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382817 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h488t\" (UniqueName: 
\"kubernetes.io/projected/454e91b9-5fe5-445a-ae9d-372899613515-kube-api-access-h488t\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-sys\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382985 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-sys\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 
crc kubenswrapper[4932]: I0218 20:19:36.383050 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382999 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383078 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383229 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383309 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383347 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383410 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383458 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383469 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383629 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcfn6\" (UniqueName: \"kubernetes.io/projected/74b2e4dc-d3d9-4aa4-8255-6c174d925528-kube-api-access-fcfn6\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383817 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383921 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-dev\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383999 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384108 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384128 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384167 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384327 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384349 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384360 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-dev\") 
pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384401 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-run\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384427 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-run\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384450 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384480 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384594 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384684 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-iscsi\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.388568 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.388630 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.389163 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.389378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.389981 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.391719 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.393929 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.394012 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.401036 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcfn6\" (UniqueName: \"kubernetes.io/projected/74b2e4dc-d3d9-4aa4-8255-6c174d925528-kube-api-access-fcfn6\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.409039 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h488t\" (UniqueName: \"kubernetes.io/projected/454e91b9-5fe5-445a-ae9d-372899613515-kube-api-access-h488t\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.415898 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.527668 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.571645 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.102798 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:19:37 crc kubenswrapper[4932]: W0218 20:19:37.107662 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80782f4b_1aed_46fc_9400_896d1a9d02f7.slice/crio-7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031 WatchSource:0}: Error finding container 7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031: Status 404 returned error can't find the container with id 7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031 Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.217096 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.459349 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:19:37 crc kubenswrapper[4932]: W0218 20:19:37.537303 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod454e91b9_5fe5_445a_ae9d_372899613515.slice/crio-bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e WatchSource:0}: Error finding container bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e: Status 404 returned error can't find the container with id bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e Feb 18 20:19:37 crc kubenswrapper[4932]: 
I0218 20:19:37.842209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"454e91b9-5fe5-445a-ae9d-372899613515","Type":"ContainerStarted","Data":"b95971177658ba52d642e7238e4ecee980abb72853fc31ea24786cacecafdc5d"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.842447 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"454e91b9-5fe5-445a-ae9d-372899613515","Type":"ContainerStarted","Data":"bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.844334 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"74b2e4dc-d3d9-4aa4-8255-6c174d925528","Type":"ContainerStarted","Data":"e8531dd854d16aa76856d0d54c9e186cb1fa0a9db5a075b1d3eab6ff2c38e1f1"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.844604 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"74b2e4dc-d3d9-4aa4-8255-6c174d925528","Type":"ContainerStarted","Data":"d420ab5610f4ea8287899e3183abf991cb38a3378a3b16a21774a88c82218f0c"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.846594 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80782f4b-1aed-46fc-9400-896d1a9d02f7","Type":"ContainerStarted","Data":"b951723a7b1973f640ad8f3ac8bb268f14dee1538c422983e04afa8b026c38aa"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.846613 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80782f4b-1aed-46fc-9400-896d1a9d02f7","Type":"ContainerStarted","Data":"7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.182395 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:38 
crc kubenswrapper[4932]: I0218 20:19:38.182630 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.856551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"74b2e4dc-d3d9-4aa4-8255-6c174d925528","Type":"ContainerStarted","Data":"f26521e0869587a39b8f76ff63c77c5a07c355f949471d009ae4d05f01d5f49b"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.858821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80782f4b-1aed-46fc-9400-896d1a9d02f7","Type":"ContainerStarted","Data":"7e7efa80400fba75484b6234daf43d2a360d3281c96eea0e23c099be7bc8fa7e"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.860815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"454e91b9-5fe5-445a-ae9d-372899613515","Type":"ContainerStarted","Data":"d146e8fc018c30c8e8d2da8c4f65b77944594efe79a7bb710ac1fcae182a389e"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.885580 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.664174417 podStartE2EDuration="2.885561532s" podCreationTimestamp="2026-02-18 20:19:36 +0000 UTC" firstStartedPulling="2026-02-18 20:19:37.361364894 +0000 UTC m=+2740.943319739" lastFinishedPulling="2026-02-18 20:19:37.582752009 +0000 UTC m=+2741.164706854" observedRunningTime="2026-02-18 20:19:38.877443721 +0000 UTC m=+2742.459398566" watchObservedRunningTime="2026-02-18 20:19:38.885561532 +0000 UTC m=+2742.467516377" Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.902280 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.648682123 podStartE2EDuration="2.902258665s" podCreationTimestamp="2026-02-18 20:19:36 +0000 UTC" 
firstStartedPulling="2026-02-18 20:19:37.110748135 +0000 UTC m=+2740.692702980" lastFinishedPulling="2026-02-18 20:19:37.364324677 +0000 UTC m=+2740.946279522" observedRunningTime="2026-02-18 20:19:38.897397535 +0000 UTC m=+2742.479352380" watchObservedRunningTime="2026-02-18 20:19:38.902258665 +0000 UTC m=+2742.484213510" Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.921480 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.858006451 podStartE2EDuration="2.92146169s" podCreationTimestamp="2026-02-18 20:19:36 +0000 UTC" firstStartedPulling="2026-02-18 20:19:37.551309482 +0000 UTC m=+2741.133264327" lastFinishedPulling="2026-02-18 20:19:37.614764721 +0000 UTC m=+2741.196719566" observedRunningTime="2026-02-18 20:19:38.917587694 +0000 UTC m=+2742.499542559" watchObservedRunningTime="2026-02-18 20:19:38.92146169 +0000 UTC m=+2742.503416525" Feb 18 20:19:39 crc kubenswrapper[4932]: I0218 20:19:39.236454 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" probeResult="failure" output=< Feb 18 20:19:39 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:19:39 crc kubenswrapper[4932]: > Feb 18 20:19:41 crc kubenswrapper[4932]: I0218 20:19:41.416920 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 18 20:19:41 crc kubenswrapper[4932]: I0218 20:19:41.528737 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:41 crc kubenswrapper[4932]: I0218 20:19:41.572588 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.075069 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.079248 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.092042 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.134102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.134144 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.134220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.239479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"community-operators-hwssw\" (UID: 
\"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.239752 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.239786 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.241247 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.242830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.275519 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"community-operators-hwssw\" (UID: 
\"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.447246 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.880504 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:43 crc kubenswrapper[4932]: I0218 20:19:43.909815 4932 generic.go:334] "Generic (PLEG): container finished" podID="98eb30d5-e437-4090-a44b-84245137fb3c" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" exitCode=0 Feb 18 20:19:43 crc kubenswrapper[4932]: I0218 20:19:43.909907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1"} Feb 18 20:19:43 crc kubenswrapper[4932]: I0218 20:19:43.910408 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerStarted","Data":"91192c666182a564538f31414848540c74e1395813e0c0300c2862c80cb37cb2"} Feb 18 20:19:44 crc kubenswrapper[4932]: I0218 20:19:44.920247 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerStarted","Data":"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc"} Feb 18 20:19:45 crc kubenswrapper[4932]: I0218 20:19:45.941417 4932 generic.go:334] "Generic (PLEG): container finished" podID="98eb30d5-e437-4090-a44b-84245137fb3c" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" exitCode=0 Feb 18 20:19:45 crc kubenswrapper[4932]: I0218 
20:19:45.941470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc"} Feb 18 20:19:46 crc kubenswrapper[4932]: I0218 20:19:46.590885 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 18 20:19:46 crc kubenswrapper[4932]: I0218 20:19:46.740225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:46 crc kubenswrapper[4932]: I0218 20:19:46.903416 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:47 crc kubenswrapper[4932]: I0218 20:19:47.969842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerStarted","Data":"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736"} Feb 18 20:19:47 crc kubenswrapper[4932]: I0218 20:19:47.992148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hwssw" podStartSLOduration=2.545072834 podStartE2EDuration="5.992125633s" podCreationTimestamp="2026-02-18 20:19:42 +0000 UTC" firstStartedPulling="2026-02-18 20:19:43.913026831 +0000 UTC m=+2747.494981676" lastFinishedPulling="2026-02-18 20:19:47.36007963 +0000 UTC m=+2750.942034475" observedRunningTime="2026-02-18 20:19:47.988497033 +0000 UTC m=+2751.570451878" watchObservedRunningTime="2026-02-18 20:19:47.992125633 +0000 UTC m=+2751.574080498" Feb 18 20:19:49 crc kubenswrapper[4932]: I0218 20:19:49.231895 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" 
probeResult="failure" output=< Feb 18 20:19:49 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:19:49 crc kubenswrapper[4932]: > Feb 18 20:19:52 crc kubenswrapper[4932]: I0218 20:19:52.447756 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:52 crc kubenswrapper[4932]: I0218 20:19:52.448816 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:52 crc kubenswrapper[4932]: I0218 20:19:52.504970 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:53 crc kubenswrapper[4932]: I0218 20:19:53.086585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:53 crc kubenswrapper[4932]: I0218 20:19:53.136038 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.058616 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hwssw" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="registry-server" containerID="cri-o://50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" gracePeriod=2 Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.612385 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.715530 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"98eb30d5-e437-4090-a44b-84245137fb3c\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.715628 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"98eb30d5-e437-4090-a44b-84245137fb3c\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.715657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"98eb30d5-e437-4090-a44b-84245137fb3c\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.716895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities" (OuterVolumeSpecName: "utilities") pod "98eb30d5-e437-4090-a44b-84245137fb3c" (UID: "98eb30d5-e437-4090-a44b-84245137fb3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.722860 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf" (OuterVolumeSpecName: "kube-api-access-slpdf") pod "98eb30d5-e437-4090-a44b-84245137fb3c" (UID: "98eb30d5-e437-4090-a44b-84245137fb3c"). InnerVolumeSpecName "kube-api-access-slpdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.776293 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98eb30d5-e437-4090-a44b-84245137fb3c" (UID: "98eb30d5-e437-4090-a44b-84245137fb3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.817711 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.817741 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.817750 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.069333 4932 generic.go:334] "Generic (PLEG): container finished" podID="98eb30d5-e437-4090-a44b-84245137fb3c" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" exitCode=0 Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.069422 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.069415 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736"} Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.070526 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"91192c666182a564538f31414848540c74e1395813e0c0300c2862c80cb37cb2"} Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.070653 4932 scope.go:117] "RemoveContainer" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.098276 4932 scope.go:117] "RemoveContainer" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.105371 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.116049 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.122164 4932 scope.go:117] "RemoveContainer" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.179814 4932 scope.go:117] "RemoveContainer" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" Feb 18 20:19:56 crc kubenswrapper[4932]: E0218 20:19:56.180189 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736\": container with ID starting with 50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736 not found: ID does not exist" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180238 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736"} err="failed to get container status \"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736\": rpc error: code = NotFound desc = could not find container \"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736\": container with ID starting with 50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736 not found: ID does not exist" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180271 4932 scope.go:117] "RemoveContainer" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" Feb 18 20:19:56 crc kubenswrapper[4932]: E0218 20:19:56.180676 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc\": container with ID starting with 0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc not found: ID does not exist" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180708 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc"} err="failed to get container status \"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc\": rpc error: code = NotFound desc = could not find container \"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc\": container with ID 
starting with 0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc not found: ID does not exist" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180732 4932 scope.go:117] "RemoveContainer" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" Feb 18 20:19:56 crc kubenswrapper[4932]: E0218 20:19:56.181043 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1\": container with ID starting with 864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1 not found: ID does not exist" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.181070 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1"} err="failed to get container status \"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1\": rpc error: code = NotFound desc = could not find container \"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1\": container with ID starting with 864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1 not found: ID does not exist" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.196962 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" path="/var/lib/kubelet/pods/98eb30d5-e437-4090-a44b-84245137fb3c/volumes" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.606578 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 
20:19:57.606647 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.606700 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.607487 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.607543 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec" gracePeriod=600 Feb 18 20:19:58 crc kubenswrapper[4932]: I0218 20:19:58.102711 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec" exitCode=0 Feb 18 20:19:58 crc kubenswrapper[4932]: I0218 20:19:58.102778 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec"} Feb 18 20:19:58 crc 
kubenswrapper[4932]: I0218 20:19:58.103446 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"} Feb 18 20:19:58 crc kubenswrapper[4932]: I0218 20:19:58.103468 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:19:59 crc kubenswrapper[4932]: I0218 20:19:59.242407 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" probeResult="failure" output=< Feb 18 20:19:59 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:19:59 crc kubenswrapper[4932]: > Feb 18 20:20:08 crc kubenswrapper[4932]: I0218 20:20:08.260739 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:20:08 crc kubenswrapper[4932]: I0218 20:20:08.346493 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.535199 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.775547 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:09 crc kubenswrapper[4932]: E0218 20:20:09.776140 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="registry-server" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776195 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" 
containerName="registry-server" Feb 18 20:20:09 crc kubenswrapper[4932]: E0218 20:20:09.776221 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-utilities" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776229 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-utilities" Feb 18 20:20:09 crc kubenswrapper[4932]: E0218 20:20:09.776264 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-content" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776303 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-content" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776564 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="registry-server" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.778390 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.798292 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.845284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.845634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.845779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.947780 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.947889 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.948075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.948417 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.948468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.973575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.111240 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.266464 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" containerID="cri-o://ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" gracePeriod=2 Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.616729 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.703560 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769048 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"9c726726-9ae9-4956-9999-09c956029615\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769094 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"9c726726-9ae9-4956-9999-09c956029615\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769214 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"9c726726-9ae9-4956-9999-09c956029615\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769892 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities" (OuterVolumeSpecName: "utilities") pod "9c726726-9ae9-4956-9999-09c956029615" (UID: "9c726726-9ae9-4956-9999-09c956029615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.774596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q" (OuterVolumeSpecName: "kube-api-access-9tl7q") pod "9c726726-9ae9-4956-9999-09c956029615" (UID: "9c726726-9ae9-4956-9999-09c956029615"). InnerVolumeSpecName "kube-api-access-9tl7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.871647 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.871684 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.880550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c726726-9ae9-4956-9999-09c956029615" (UID: "9c726726-9ae9-4956-9999-09c956029615"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.973825 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279682 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c726726-9ae9-4956-9999-09c956029615" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" exitCode=0 Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"} Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279786 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279804 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"e093f1cca327bf041c93f56c487c61e59e1e403678b96164f3bbb1c6097b672a"} Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279836 4932 scope.go:117] "RemoveContainer" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.285090 4932 generic.go:334] "Generic (PLEG): container finished" podID="5411e325-db57-464b-b5cd-312b4dd719a6" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe" exitCode=0 Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.285142 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"} Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.285206 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerStarted","Data":"1e58f58be7fdf29dc380a3f94ed9e5b8c8d93390baacd2a911ef7c5416afd603"} Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.322789 4932 scope.go:117] "RemoveContainer" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.358754 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.368625 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 
20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.372211 4932 scope.go:117] "RemoveContainer" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.425853 4932 scope.go:117] "RemoveContainer" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" Feb 18 20:20:11 crc kubenswrapper[4932]: E0218 20:20:11.426353 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a\": container with ID starting with ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a not found: ID does not exist" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.426417 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"} err="failed to get container status \"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a\": rpc error: code = NotFound desc = could not find container \"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a\": container with ID starting with ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a not found: ID does not exist" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.426453 4932 scope.go:117] "RemoveContainer" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f" Feb 18 20:20:11 crc kubenswrapper[4932]: E0218 20:20:11.427127 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f\": container with ID starting with c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f not found: ID does not exist" 
containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.427169 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"} err="failed to get container status \"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f\": rpc error: code = NotFound desc = could not find container \"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f\": container with ID starting with c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f not found: ID does not exist" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.427213 4932 scope.go:117] "RemoveContainer" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc" Feb 18 20:20:11 crc kubenswrapper[4932]: E0218 20:20:11.427673 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc\": container with ID starting with d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc not found: ID does not exist" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc" Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.427700 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"} err="failed to get container status \"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc\": rpc error: code = NotFound desc = could not find container \"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc\": container with ID starting with d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc not found: ID does not exist" Feb 18 20:20:12 crc kubenswrapper[4932]: I0218 20:20:12.301679 4932 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerStarted","Data":"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"} Feb 18 20:20:13 crc kubenswrapper[4932]: I0218 20:20:13.195075 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c726726-9ae9-4956-9999-09c956029615" path="/var/lib/kubelet/pods/9c726726-9ae9-4956-9999-09c956029615/volumes" Feb 18 20:20:13 crc kubenswrapper[4932]: I0218 20:20:13.315851 4932 generic.go:334] "Generic (PLEG): container finished" podID="5411e325-db57-464b-b5cd-312b4dd719a6" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53" exitCode=0 Feb 18 20:20:13 crc kubenswrapper[4932]: I0218 20:20:13.315899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"} Feb 18 20:20:14 crc kubenswrapper[4932]: I0218 20:20:14.332523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerStarted","Data":"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"} Feb 18 20:20:14 crc kubenswrapper[4932]: I0218 20:20:14.369842 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5g8jx" podStartSLOduration=2.878987376 podStartE2EDuration="5.369817444s" podCreationTimestamp="2026-02-18 20:20:09 +0000 UTC" firstStartedPulling="2026-02-18 20:20:11.289446054 +0000 UTC m=+2774.871400939" lastFinishedPulling="2026-02-18 20:20:13.780276132 +0000 UTC m=+2777.362231007" observedRunningTime="2026-02-18 20:20:14.360922644 +0000 UTC m=+2777.942877529" watchObservedRunningTime="2026-02-18 20:20:14.369817444 +0000 
UTC m=+2777.951772299" Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.111608 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.112155 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.203050 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.456751 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.511677 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:22 crc kubenswrapper[4932]: I0218 20:20:22.423969 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5g8jx" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" containerID="cri-o://f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" gracePeriod=2 Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:22.974568 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.070353 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"5411e325-db57-464b-b5cd-312b4dd719a6\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.070467 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"5411e325-db57-464b-b5cd-312b4dd719a6\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.070539 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"5411e325-db57-464b-b5cd-312b4dd719a6\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.071814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities" (OuterVolumeSpecName: "utilities") pod "5411e325-db57-464b-b5cd-312b4dd719a6" (UID: "5411e325-db57-464b-b5cd-312b4dd719a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.078581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds" (OuterVolumeSpecName: "kube-api-access-q7sds") pod "5411e325-db57-464b-b5cd-312b4dd719a6" (UID: "5411e325-db57-464b-b5cd-312b4dd719a6"). InnerVolumeSpecName "kube-api-access-q7sds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.098618 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5411e325-db57-464b-b5cd-312b4dd719a6" (UID: "5411e325-db57-464b-b5cd-312b4dd719a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.173104 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.173130 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.173140 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.436834 4932 generic.go:334] "Generic (PLEG): container finished" podID="5411e325-db57-464b-b5cd-312b4dd719a6" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" exitCode=0 Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.436896 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.436917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"} Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.437733 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"1e58f58be7fdf29dc380a3f94ed9e5b8c8d93390baacd2a911ef7c5416afd603"} Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.437773 4932 scope.go:117] "RemoveContainer" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.468696 4932 scope.go:117] "RemoveContainer" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.489256 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.493641 4932 scope.go:117] "RemoveContainer" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.501372 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.559997 4932 scope.go:117] "RemoveContainer" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" Feb 18 20:20:23 crc kubenswrapper[4932]: E0218 20:20:23.560981 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b\": container with ID starting with f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b not found: ID does not exist" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561032 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"} err="failed to get container status \"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b\": rpc error: code = NotFound desc = could not find container \"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b\": container with ID starting with f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b not found: ID does not exist" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561066 4932 scope.go:117] "RemoveContainer" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53" Feb 18 20:20:23 crc kubenswrapper[4932]: E0218 20:20:23.561504 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53\": container with ID starting with 75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53 not found: ID does not exist" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561524 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"} err="failed to get container status \"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53\": rpc error: code = NotFound desc = could not find container \"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53\": container with ID 
starting with 75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53 not found: ID does not exist" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561537 4932 scope.go:117] "RemoveContainer" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe" Feb 18 20:20:23 crc kubenswrapper[4932]: E0218 20:20:23.561782 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe\": container with ID starting with c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe not found: ID does not exist" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe" Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561798 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"} err="failed to get container status \"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe\": rpc error: code = NotFound desc = could not find container \"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe\": container with ID starting with c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe not found: ID does not exist" Feb 18 20:20:25 crc kubenswrapper[4932]: I0218 20:20:25.194635 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" path="/var/lib/kubelet/pods/5411e325-db57-464b-b5cd-312b4dd719a6/volumes" Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.833758 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.834676 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" 
containerName="prometheus" containerID="cri-o://361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2" gracePeriod=600 Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.834796 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" containerID="cri-o://87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5" gracePeriod=600 Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.834796 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" containerID="cri-o://d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b" gracePeriod=600 Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.708964 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5" exitCode=0 Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709249 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b" exitCode=0 Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709268 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2" exitCode=0 Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709051 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5"} Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709305 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b"} Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709318 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2"} Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.882611 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932787 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932873 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932895 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932993 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933036 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933056 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933100 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933567 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2" 
(OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933831 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933865 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933923 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933984 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.934081 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.934703 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.935621 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.935940 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991418 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991658 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991725 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out" (OuterVolumeSpecName: "config-out") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991750 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk" (OuterVolumeSpecName: "kube-api-access-fnmwk") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "kube-api-access-fnmwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.994343 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config" (OuterVolumeSpecName: "config") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.995337 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.007311 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.040563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079480 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079524 4932 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079534 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079546 4932 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079575 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" " Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079586 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079595 4932 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079606 4932 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079616 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079628 4932 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079640 4932 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.194440 4932 
csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.194592 4932 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69") on node "crc" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.205318 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config" (OuterVolumeSpecName: "web-config") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.287853 4932 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.287905 4932 reconciler_common.go:293] "Volume detached for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.722509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"517321ee2b5c108f37907af390aff2f58338e81a6d4f29d0b1fb1230f8840a63"} Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.722778 4932 scope.go:117] "RemoveContainer" containerID="87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.722645 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.747492 4932 scope.go:117] "RemoveContainer" containerID="d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.789047 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.791671 4932 scope.go:117] "RemoveContainer" containerID="361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.797776 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.830999 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831425 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831441 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831454 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831460 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831472 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="init-config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 
20:20:46.831480 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="init-config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831489 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831494 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831506 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831511 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831524 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831529 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831546 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831552 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831566 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831571 
4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831581 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831588 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831611 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831616 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831777 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831790 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831804 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831817 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831824 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.834783 
4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.835523 4932 scope.go:117] "RemoveContainer" containerID="81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839107 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839118 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839390 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5jcnf" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839459 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839406 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839645 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.843084 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.846661 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.848629 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 
20:20:47.004977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/406a2738-127b-4d6d-8de4-3f5d88896b4c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005067 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005182 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005293 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005399 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005462 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005565 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005602 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: 
I0218 20:20:47.005665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005735 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005769 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwcp\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-kube-api-access-dpwcp\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005862 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107290 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107344 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107388 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwcp\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-kube-api-access-dpwcp\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/406a2738-127b-4d6d-8de4-3f5d88896b4c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.108746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.109414 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc 
kubenswrapper[4932]: I0218 20:20:47.111350 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.114503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.114914 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.118839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/406a2738-127b-4d6d-8de4-3f5d88896b4c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 
20:20:47.119398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119573 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.121276 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.121307 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e039419306e79ade7652e80c67474011a5658585fd3b39d0b236ffa94ab5d0db/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.131712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwcp\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-kube-api-access-dpwcp\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.176444 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.191128 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" path="/var/lib/kubelet/pods/f1783f11-a79f-49d9-a637-224863cdb0ad/volumes" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.218403 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.693802 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:47 crc kubenswrapper[4932]: W0218 20:20:47.698817 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod406a2738_127b_4d6d_8de4_3f5d88896b4c.slice/crio-2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483 WatchSource:0}: Error finding container 2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483: Status 404 returned error can't find the container with id 2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483 Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.741285 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483"} Feb 18 20:20:51 crc kubenswrapper[4932]: I0218 20:20:51.784754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"0efee390723588d24468eabdbf10c5a2ebb771b098b4b1e7b752a6aa1567498d"} Feb 18 20:21:00 crc kubenswrapper[4932]: I0218 20:21:00.889541 4932 generic.go:334] "Generic (PLEG): container finished" podID="406a2738-127b-4d6d-8de4-3f5d88896b4c" containerID="0efee390723588d24468eabdbf10c5a2ebb771b098b4b1e7b752a6aa1567498d" exitCode=0 Feb 18 20:21:00 crc kubenswrapper[4932]: I0218 20:21:00.889681 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerDied","Data":"0efee390723588d24468eabdbf10c5a2ebb771b098b4b1e7b752a6aa1567498d"} Feb 18 20:21:01 crc 
kubenswrapper[4932]: I0218 20:21:01.902448 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"d50ab465ae80ff5724a700d003136a7902824fcb39d6203d36f12ac40ffa0cad"} Feb 18 20:21:05 crc kubenswrapper[4932]: I0218 20:21:05.963356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"a79ef73103ea68b3eabdc169f566d82ef76b1733f0787fad30b43241c052da85"} Feb 18 20:21:05 crc kubenswrapper[4932]: I0218 20:21:05.963665 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"0139950794a616f732ea759c1c7878a78bc2714d801bde2d8066230e81c5ffde"} Feb 18 20:21:06 crc kubenswrapper[4932]: I0218 20:21:06.011159 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.011137692 podStartE2EDuration="20.011137692s" podCreationTimestamp="2026-02-18 20:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:21:06.003320249 +0000 UTC m=+2829.585275134" watchObservedRunningTime="2026-02-18 20:21:06.011137692 +0000 UTC m=+2829.593092557" Feb 18 20:21:07 crc kubenswrapper[4932]: I0218 20:21:07.219553 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:17 crc kubenswrapper[4932]: I0218 20:21:17.218690 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:17 crc kubenswrapper[4932]: I0218 20:21:17.227913 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:18 crc kubenswrapper[4932]: I0218 20:21:18.139581 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.339272 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.341539 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.346014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.346309 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bccj2" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.348799 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.349450 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.361406 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.467925 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.467977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhls\" 
(UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468039 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468162 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468296 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468323 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468443 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570707 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570867 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570946 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570979 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.571037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.571319 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.571579 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.572376 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.573101 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.577797 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.579359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.579421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.591117 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " 
pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.606982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.668509 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:21:31 crc kubenswrapper[4932]: I0218 20:21:31.131106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:21:31 crc kubenswrapper[4932]: I0218 20:21:31.298696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerStarted","Data":"8c283e884b8f80bf01f3a12151451c0769e806057cd6f8d4c57d644f30012eb1"} Feb 18 20:21:41 crc kubenswrapper[4932]: I0218 20:21:41.396801 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerStarted","Data":"b8cc57bfeb38d618854d30ad2a0303534b4a0674c797bd1d7dcd4db1e8159186"} Feb 18 20:21:41 crc kubenswrapper[4932]: I0218 20:21:41.431471 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.392070405 podStartE2EDuration="12.431451572s" podCreationTimestamp="2026-02-18 20:21:29 +0000 UTC" firstStartedPulling="2026-02-18 20:21:31.133567107 +0000 UTC m=+2854.715521952" lastFinishedPulling="2026-02-18 20:21:40.172948274 +0000 UTC m=+2863.754903119" observedRunningTime="2026-02-18 20:21:41.424036199 +0000 UTC m=+2865.005991104" watchObservedRunningTime="2026-02-18 20:21:41.431451572 +0000 UTC m=+2865.013406427" Feb 18 20:22:27 crc kubenswrapper[4932]: I0218 
20:22:27.606134 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:22:27 crc kubenswrapper[4932]: I0218 20:22:27.606591 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:22:57 crc kubenswrapper[4932]: I0218 20:22:57.605910 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:22:57 crc kubenswrapper[4932]: I0218 20:22:57.606919 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.607349 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.608776 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.608862 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.610407 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.610559 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" gracePeriod=600 Feb 18 20:23:27 crc kubenswrapper[4932]: E0218 20:23:27.746241 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.704880 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" exitCode=0 Feb 18 
20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.704991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"} Feb 18 20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.705214 4932 scope.go:117] "RemoveContainer" containerID="0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec" Feb 18 20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.705839 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:23:28 crc kubenswrapper[4932]: E0218 20:23:28.706384 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:23:40 crc kubenswrapper[4932]: I0218 20:23:40.179970 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:23:40 crc kubenswrapper[4932]: E0218 20:23:40.182008 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:23:53 crc kubenswrapper[4932]: I0218 20:23:53.180771 4932 scope.go:117] "RemoveContainer" 
containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:23:53 crc kubenswrapper[4932]: E0218 20:23:53.181887 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:05 crc kubenswrapper[4932]: I0218 20:24:05.233084 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:05 crc kubenswrapper[4932]: E0218 20:24:05.233857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:16 crc kubenswrapper[4932]: I0218 20:24:16.180023 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:16 crc kubenswrapper[4932]: E0218 20:24:16.181269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:30 crc kubenswrapper[4932]: I0218 20:24:30.179936 4932 scope.go:117] 
"RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:30 crc kubenswrapper[4932]: E0218 20:24:30.180661 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:43 crc kubenswrapper[4932]: I0218 20:24:43.179292 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:43 crc kubenswrapper[4932]: E0218 20:24:43.180105 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:58 crc kubenswrapper[4932]: I0218 20:24:58.179226 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:58 crc kubenswrapper[4932]: E0218 20:24:58.180138 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:25:10 crc kubenswrapper[4932]: I0218 20:25:10.179332 
4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:25:10 crc kubenswrapper[4932]: E0218 20:25:10.180450 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:25:25 crc kubenswrapper[4932]: I0218 20:25:25.180218 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:25:25 crc kubenswrapper[4932]: E0218 20:25:25.181294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:25:38 crc kubenswrapper[4932]: I0218 20:25:38.178972 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:25:38 crc kubenswrapper[4932]: E0218 20:25:38.179676 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:25:51 crc kubenswrapper[4932]: I0218 20:25:51.179753 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:25:51 crc kubenswrapper[4932]: E0218 20:25:51.180750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:26:03 crc kubenswrapper[4932]: I0218 20:26:03.180250 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:26:03 crc kubenswrapper[4932]: E0218 20:26:03.181303 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:26:15 crc kubenswrapper[4932]: I0218 20:26:15.179617 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:26:15 crc kubenswrapper[4932]: E0218 20:26:15.180619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:26:29 crc kubenswrapper[4932]: I0218 20:26:29.180064 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:26:29 crc kubenswrapper[4932]: E0218 20:26:29.181222 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:26:43 crc kubenswrapper[4932]: I0218 20:26:43.179722 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:26:43 crc kubenswrapper[4932]: E0218 20:26:43.180843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:26:54 crc kubenswrapper[4932]: I0218 20:26:54.179940 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:26:54 crc kubenswrapper[4932]: E0218 20:26:54.181963 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:27:06 crc kubenswrapper[4932]: I0218 20:27:06.179981 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:27:06 crc kubenswrapper[4932]: E0218 20:27:06.181110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:27:18 crc kubenswrapper[4932]: I0218 20:27:18.179953 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:27:18 crc kubenswrapper[4932]: E0218 20:27:18.180771 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:27:29 crc kubenswrapper[4932]: I0218 20:27:29.179963 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:27:29 crc kubenswrapper[4932]: E0218 20:27:29.180645 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:27:44 crc kubenswrapper[4932]: I0218 20:27:44.179718 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:27:44 crc kubenswrapper[4932]: E0218 20:27:44.180941 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.006890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhb45"]
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.010998 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.034671 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhb45"]
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.062300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-catalog-content\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.062499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsc85\" (UniqueName: \"kubernetes.io/projected/1a69dedd-7666-4739-af80-59d37eedf9b1-kube-api-access-zsc85\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.062739 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-utilities\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-utilities\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164519 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-catalog-content\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsc85\" (UniqueName: \"kubernetes.io/projected/1a69dedd-7666-4739-af80-59d37eedf9b1-kube-api-access-zsc85\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-utilities\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164995 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-catalog-content\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.208660 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsc85\" (UniqueName: \"kubernetes.io/projected/1a69dedd-7666-4739-af80-59d37eedf9b1-kube-api-access-zsc85\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.352798 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhb45"
Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.764793 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhb45"]
Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.179607 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:27:56 crc kubenswrapper[4932]: E0218 20:27:56.180011 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.554719 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a69dedd-7666-4739-af80-59d37eedf9b1" containerID="d70b17d5eed673b4dc82174e8289c879ab43a9e04b99bb7ee050e01a1fe688b6" exitCode=0
Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.554814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerDied","Data":"d70b17d5eed673b4dc82174e8289c879ab43a9e04b99bb7ee050e01a1fe688b6"}
Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.555045 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerStarted","Data":"39755ade2378128d460b755d127aefa3640d1c9e71491bc80b1a7158a1c5985c"}
Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.564570 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 20:27:57 crc kubenswrapper[4932]: I0218 20:27:57.566705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerStarted","Data":"4fdcd4c8b9327eed859283165a216726bc40d559357de1301d6dd45413fabca8"}
Feb 18 20:27:59 crc kubenswrapper[4932]: I0218 20:27:59.588973 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a69dedd-7666-4739-af80-59d37eedf9b1" containerID="4fdcd4c8b9327eed859283165a216726bc40d559357de1301d6dd45413fabca8" exitCode=0
Feb 18 20:27:59 crc kubenswrapper[4932]: I0218 20:27:59.589056 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerDied","Data":"4fdcd4c8b9327eed859283165a216726bc40d559357de1301d6dd45413fabca8"}
Feb 18 20:27:59 crc kubenswrapper[4932]: E0218 20:27:59.986784 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:27:59 crc kubenswrapper[4932]: E0218 20:27:59.986950 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:27:59 crc kubenswrapper[4932]: E0218 20:27:59.988133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:28:11 crc kubenswrapper[4932]: I0218 20:28:11.180096 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:28:11 crc kubenswrapper[4932]: E0218 20:28:11.181263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:28:12 crc kubenswrapper[4932]: E0218 20:28:12.079294 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:28:12 crc kubenswrapper[4932]: E0218 20:28:12.079732 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:28:12 crc kubenswrapper[4932]: E0218 20:28:12.081261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:28:24 crc kubenswrapper[4932]: E0218 20:28:24.186220 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:28:25 crc kubenswrapper[4932]: I0218 20:28:25.179163 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:28:25 crc kubenswrapper[4932]: E0218 20:28:25.179865 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:28:38 crc kubenswrapper[4932]: I0218 20:28:38.195161 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"
Feb 18 20:28:38 crc kubenswrapper[4932]: E0218 20:28:38.560851 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:28:38 crc kubenswrapper[4932]: E0218 20:28:38.561246 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:28:38 crc kubenswrapper[4932]: E0218 20:28:38.562718 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:28:39 crc kubenswrapper[4932]: I0218 20:28:39.271539 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"}
Feb 18 20:28:52 crc kubenswrapper[4932]: E0218 20:28:52.183322 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:29:07 crc kubenswrapper[4932]: E0218 20:29:07.190513 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:29:20 crc kubenswrapper[4932]: E0218 20:29:20.638627 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:29:20 crc kubenswrapper[4932]: E0218 20:29:20.639262 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:29:20 crc kubenswrapper[4932]: E0218 20:29:20.640502 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:29:33 crc kubenswrapper[4932]: E0218 20:29:33.185204 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:29:46 crc kubenswrapper[4932]: E0218 20:29:46.183373 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:29:58 crc kubenswrapper[4932]: E0218 20:29:58.182725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.164206 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"]
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.166065 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.169864 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.169927 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.182041 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"]
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.336530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.337268 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.337458 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.440213 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.440564 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.442083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"
Feb 18 20:30:00 crc kubenswrapper[4932]: I0218
20:30:00.442270 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.451788 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.467830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.492589 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:01 crc kubenswrapper[4932]: I0218 20:30:01.040199 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"] Feb 18 20:30:01 crc kubenswrapper[4932]: W0218 20:30:01.046601 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb626f706_b9f8_4e4b_9230_4af819e3faff.slice/crio-2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd WatchSource:0}: Error finding container 2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd: Status 404 returned error can't find the container with id 2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd Feb 18 20:30:01 crc kubenswrapper[4932]: I0218 20:30:01.298253 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" event={"ID":"b626f706-b9f8-4e4b-9230-4af819e3faff","Type":"ContainerStarted","Data":"2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd"} Feb 18 20:30:02 crc kubenswrapper[4932]: I0218 20:30:02.311463 4932 generic.go:334] "Generic (PLEG): container finished" podID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerID="a28ea4ba70aaf87a1e424c61365edff478f3b92e18d9a5358cbec200c9470566" exitCode=0 Feb 18 20:30:02 crc kubenswrapper[4932]: I0218 20:30:02.311587 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" event={"ID":"b626f706-b9f8-4e4b-9230-4af819e3faff","Type":"ContainerDied","Data":"a28ea4ba70aaf87a1e424c61365edff478f3b92e18d9a5358cbec200c9470566"} Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.798792 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.966952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"b626f706-b9f8-4e4b-9230-4af819e3faff\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.967073 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"b626f706-b9f8-4e4b-9230-4af819e3faff\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.967152 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"b626f706-b9f8-4e4b-9230-4af819e3faff\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.968272 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume" (OuterVolumeSpecName: "config-volume") pod "b626f706-b9f8-4e4b-9230-4af819e3faff" (UID: "b626f706-b9f8-4e4b-9230-4af819e3faff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.983680 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d" (OuterVolumeSpecName: "kube-api-access-bxx2d") pod "b626f706-b9f8-4e4b-9230-4af819e3faff" (UID: "b626f706-b9f8-4e4b-9230-4af819e3faff"). 
InnerVolumeSpecName "kube-api-access-bxx2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.983923 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b626f706-b9f8-4e4b-9230-4af819e3faff" (UID: "b626f706-b9f8-4e4b-9230-4af819e3faff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.069874 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.069910 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.069922 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.338285 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" event={"ID":"b626f706-b9f8-4e4b-9230-4af819e3faff","Type":"ContainerDied","Data":"2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd"} Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.338345 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.338348 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.899525 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.900566 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 20:30:05 crc kubenswrapper[4932]: I0218 20:30:05.192136 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84719922-9618-4293-8f4a-fb525f37eca6" path="/var/lib/kubelet/pods/84719922-9618-4293-8f4a-fb525f37eca6/volumes" Feb 18 20:30:06 crc kubenswrapper[4932]: I0218 20:30:06.540353 4932 scope.go:117] "RemoveContainer" containerID="80752bb80b5cb6dad23a49c747590ff84b2c23ef678e45c05c4cf091b2c9b0a9" Feb 18 20:30:12 crc kubenswrapper[4932]: E0218 20:30:12.183187 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:25 crc kubenswrapper[4932]: E0218 20:30:25.200584 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:38 crc kubenswrapper[4932]: E0218 20:30:38.182158 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.029793 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkmxd"] Feb 18 20:30:47 crc kubenswrapper[4932]: E0218 20:30:47.031514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerName="collect-profiles" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.031538 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerName="collect-profiles" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.031866 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerName="collect-profiles" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.034359 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.046698 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkmxd"] Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.157848 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-catalog-content\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.158336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-utilities\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.158424 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77nm\" (UniqueName: \"kubernetes.io/projected/9c30675a-a3c0-497c-804a-42c3640846eb-kube-api-access-r77nm\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-utilities\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261372 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r77nm\" (UniqueName: \"kubernetes.io/projected/9c30675a-a3c0-497c-804a-42c3640846eb-kube-api-access-r77nm\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261604 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-catalog-content\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261981 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-utilities\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.262429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-catalog-content\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.305318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77nm\" (UniqueName: \"kubernetes.io/projected/9c30675a-a3c0-497c-804a-42c3640846eb-kube-api-access-r77nm\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.387261 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.919283 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkmxd"] Feb 18 20:30:48 crc kubenswrapper[4932]: I0218 20:30:48.885546 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c30675a-a3c0-497c-804a-42c3640846eb" containerID="16369a6bddfc3696f7afc6cc93dd9e7c1afad8ee1bd2329bd714895abb808f8c" exitCode=0 Feb 18 20:30:48 crc kubenswrapper[4932]: I0218 20:30:48.885604 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkmxd" event={"ID":"9c30675a-a3c0-497c-804a-42c3640846eb","Type":"ContainerDied","Data":"16369a6bddfc3696f7afc6cc93dd9e7c1afad8ee1bd2329bd714895abb808f8c"} Feb 18 20:30:48 crc kubenswrapper[4932]: I0218 20:30:48.886377 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkmxd" event={"ID":"9c30675a-a3c0-497c-804a-42c3640846eb","Type":"ContainerStarted","Data":"7d415574aee0d64a48ca3732b58be3fb90a60251f41cdec722659c50ef2bf823"} Feb 18 20:30:49 crc kubenswrapper[4932]: E0218 20:30:49.909552 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:30:49 crc kubenswrapper[4932]: E0218 20:30:49.909703 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:49 crc kubenswrapper[4932]: E0218 20:30:49.911048 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.721475 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.722536 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.724604 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.911157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.818739 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s2grr"] Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.822823 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.853258 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2grr"] Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.980452 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l46c\" (UniqueName: \"kubernetes.io/projected/088aaa53-25ca-48c3-a904-2af0f07e8c2b-kube-api-access-9l46c\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.980682 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-utilities\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.980713 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-catalog-content\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.082288 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9l46c\" (UniqueName: \"kubernetes.io/projected/088aaa53-25ca-48c3-a904-2af0f07e8c2b-kube-api-access-9l46c\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.082427 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-utilities\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.082451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-catalog-content\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.083230 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-catalog-content\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.083330 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-utilities\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.104291 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l46c\" (UniqueName: 
\"kubernetes.io/projected/088aaa53-25ca-48c3-a904-2af0f07e8c2b-kube-api-access-9l46c\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.176628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: W0218 20:30:52.644231 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088aaa53_25ca_48c3_a904_2af0f07e8c2b.slice/crio-66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81 WatchSource:0}: Error finding container 66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81: Status 404 returned error can't find the container with id 66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81 Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.644610 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2grr"] Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.933952 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerStarted","Data":"878fe673902cd80ce6ca092b6e629d1eddb9388f8d8bc0403e169b6a7dcb2669"} Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.934270 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerStarted","Data":"66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81"} Feb 18 20:30:53 crc kubenswrapper[4932]: I0218 20:30:53.981160 4932 generic.go:334] "Generic (PLEG): container finished" podID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" 
containerID="878fe673902cd80ce6ca092b6e629d1eddb9388f8d8bc0403e169b6a7dcb2669" exitCode=0 Feb 18 20:30:53 crc kubenswrapper[4932]: I0218 20:30:53.981227 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerDied","Data":"878fe673902cd80ce6ca092b6e629d1eddb9388f8d8bc0403e169b6a7dcb2669"} Feb 18 20:30:54 crc kubenswrapper[4932]: I0218 20:30:54.993509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerStarted","Data":"2951a6d2af3494aafb188ed0909a2d2930889940302fbfb9916042c18e76b0ac"} Feb 18 20:30:56 crc kubenswrapper[4932]: I0218 20:30:56.006488 4932 generic.go:334] "Generic (PLEG): container finished" podID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" containerID="2951a6d2af3494aafb188ed0909a2d2930889940302fbfb9916042c18e76b0ac" exitCode=0 Feb 18 20:30:56 crc kubenswrapper[4932]: I0218 20:30:56.006569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerDied","Data":"2951a6d2af3494aafb188ed0909a2d2930889940302fbfb9916042c18e76b0ac"} Feb 18 20:30:56 crc kubenswrapper[4932]: E0218 20:30:56.408696 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:56 crc kubenswrapper[4932]: E0218 20:30:56.409438 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:56 crc kubenswrapper[4932]: E0218 20:30:56.410705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:30:57 crc kubenswrapper[4932]: E0218 20:30:57.022544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:30:57 crc kubenswrapper[4932]: I0218 20:30:57.606340 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:30:57 crc kubenswrapper[4932]: I0218 20:30:57.606408 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:31:01 crc kubenswrapper[4932]: E0218 20:31:01.945031 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:31:01 crc kubenswrapper[4932]: E0218 20:31:01.945608 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:01 crc kubenswrapper[4932]: E0218 20:31:01.946823 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:04 crc kubenswrapper[4932]: E0218 20:31:04.183910 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:10 crc kubenswrapper[4932]: E0218 20:31:10.948655 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:31:10 crc kubenswrapper[4932]: E0218 20:31:10.949591 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:10 crc kubenswrapper[4932]: E0218 20:31:10.950906 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:16 crc kubenswrapper[4932]: E0218 20:31:16.181967 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:19 crc kubenswrapper[4932]: E0218 20:31:19.182821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:24 crc kubenswrapper[4932]: E0218 20:31:24.182170 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:27 crc kubenswrapper[4932]: I0218 20:31:27.606113 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:31:27 crc kubenswrapper[4932]: I0218 20:31:27.606783 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:31:28 crc kubenswrapper[4932]: E0218 20:31:28.848275 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:31:28 crc kubenswrapper[4932]: E0218 20:31:28.848493 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:28 crc kubenswrapper[4932]: E0218 20:31:28.849800 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:34 crc kubenswrapper[4932]: E0218 20:31:34.182984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:36 crc kubenswrapper[4932]: E0218 20:31:36.692602 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:31:36 crc kubenswrapper[4932]: E0218 20:31:36.694947 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:36 crc kubenswrapper[4932]: E0218 20:31:36.696403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:43 crc kubenswrapper[4932]: E0218 20:31:43.184910 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:47 crc kubenswrapper[4932]: E0218 20:31:47.182047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:52 crc kubenswrapper[4932]: E0218 20:31:52.182285 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:56 crc kubenswrapper[4932]: E0218 20:31:56.181925 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.606792 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.607293 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.607351 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.608453 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.608552 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef" gracePeriod=600 Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 
20:31:57.787482 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef" exitCode=0 Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.787623 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"} Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.787983 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:31:58 crc kubenswrapper[4932]: I0218 20:31:58.803742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"} Feb 18 20:32:01 crc kubenswrapper[4932]: E0218 20:32:01.186493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:07 crc kubenswrapper[4932]: E0218 20:32:07.195607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:11 crc 
kubenswrapper[4932]: E0218 20:32:11.225245 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:32:11 crc kubenswrapper[4932]: E0218 20:32:11.226092 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy
:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:11 crc kubenswrapper[4932]: E0218 20:32:11.227414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:16 crc kubenswrapper[4932]: E0218 20:32:16.182619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:20 crc kubenswrapper[4932]: E0218 20:32:20.537507 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:32:20 crc kubenswrapper[4932]: E0218 20:32:20.538425 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:20 crc kubenswrapper[4932]: E0218 20:32:20.539692 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:22 crc kubenswrapper[4932]: E0218 20:32:22.181054 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:30 crc kubenswrapper[4932]: E0218 20:32:30.181907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:33 crc kubenswrapper[4932]: E0218 20:32:33.182752 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:34 crc kubenswrapper[4932]: E0218 20:32:34.184142 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:43 crc kubenswrapper[4932]: E0218 20:32:43.186205 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:44 crc kubenswrapper[4932]: E0218 20:32:44.181550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:47 crc kubenswrapper[4932]: E0218 20:32:47.197057 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:57 crc kubenswrapper[4932]: E0218 20:32:57.199079 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:58 crc kubenswrapper[4932]: E0218 20:32:58.183684 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:02 crc kubenswrapper[4932]: E0218 20:33:02.181506 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:09 crc kubenswrapper[4932]: E0218 20:33:09.183688 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:12 crc kubenswrapper[4932]: E0218 20:33:12.181070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:13 crc kubenswrapper[4932]: E0218 20:33:13.181690 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:24 crc kubenswrapper[4932]: E0218 20:33:24.183768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:24 crc kubenswrapper[4932]: E0218 20:33:24.183768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:25 crc kubenswrapper[4932]: E0218 20:33:25.182955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:36 crc kubenswrapper[4932]: I0218 20:33:36.183005 4932 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:33:36 crc kubenswrapper[4932]: E0218 20:33:36.627009 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:33:36 crc kubenswrapper[4932]: E0218 20:33:36.627196 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:36 crc kubenswrapper[4932]: E0218 20:33:36.628403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.182447 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.603012 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.603265 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.604556 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:48 crc kubenswrapper[4932]: E0218 20:33:48.182572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:53 crc kubenswrapper[4932]: E0218 20:33:53.186211 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:55 crc kubenswrapper[4932]: E0218 20:33:55.581629 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image 
configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:33:55 crc kubenswrapper[4932]: E0218 20:33:55.583070 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:55 crc kubenswrapper[4932]: E0218 20:33:55.584483 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:02 crc kubenswrapper[4932]: E0218 20:34:02.184414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:04 crc kubenswrapper[4932]: E0218 20:34:04.182461 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:10 crc kubenswrapper[4932]: E0218 20:34:10.180584 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:13 crc kubenswrapper[4932]: E0218 20:34:13.181924 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:18 crc kubenswrapper[4932]: E0218 20:34:18.970479 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:43576->38.102.83.190:41227: write tcp 38.102.83.190:43576->38.102.83.190:41227: write: broken pipe Feb 18 20:34:19 crc kubenswrapper[4932]: E0218 20:34:19.182592 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:23 crc kubenswrapper[4932]: E0218 20:34:23.183115 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:24 crc kubenswrapper[4932]: E0218 20:34:24.181794 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:27 crc kubenswrapper[4932]: I0218 20:34:27.606624 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:34:27 crc kubenswrapper[4932]: I0218 20:34:27.607401 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:34:34 crc kubenswrapper[4932]: E0218 20:34:34.184603 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:35 crc kubenswrapper[4932]: E0218 20:34:35.181791 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:38 crc kubenswrapper[4932]: E0218 20:34:38.183514 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:46 crc kubenswrapper[4932]: E0218 20:34:46.183608 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:47 crc kubenswrapper[4932]: E0218 20:34:47.195094 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:50 
crc kubenswrapper[4932]: E0218 20:34:50.181767 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:57 crc kubenswrapper[4932]: I0218 20:34:57.606705 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:34:57 crc kubenswrapper[4932]: I0218 20:34:57.607552 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:34:58 crc kubenswrapper[4932]: E0218 20:34:58.182453 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:59 crc kubenswrapper[4932]: E0218 20:34:59.183688 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:35:03 crc kubenswrapper[4932]: E0218 20:35:03.184305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.001874 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8slcg"] Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.005930 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.045903 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8slcg"] Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.065271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-utilities\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.065349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4sh\" (UniqueName: \"kubernetes.io/projected/57dbf2a4-5676-4291-911d-00038d3c7c75-kube-api-access-xz4sh\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.065407 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-catalog-content\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.167718 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-utilities\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.167799 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4sh\" (UniqueName: \"kubernetes.io/projected/57dbf2a4-5676-4291-911d-00038d3c7c75-kube-api-access-xz4sh\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.167848 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-catalog-content\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.168496 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-utilities\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.168599 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-catalog-content\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.212209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4sh\" (UniqueName: \"kubernetes.io/projected/57dbf2a4-5676-4291-911d-00038d3c7c75-kube-api-access-xz4sh\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.377242 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.923091 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8slcg"] Feb 18 20:35:05 crc kubenswrapper[4932]: I0218 20:35:05.023344 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerStarted","Data":"c7f2e7fbb70d9869728b6339d84005be8b57a54314e232383f2b03d5551f1998"} Feb 18 20:35:06 crc kubenswrapper[4932]: I0218 20:35:06.035454 4932 generic.go:334] "Generic (PLEG): container finished" podID="57dbf2a4-5676-4291-911d-00038d3c7c75" containerID="e37184b7e67125463f4eb5eda4953c7cfea3d3bd4c0efeedfc7b40d067b85b17" exitCode=0 Feb 18 20:35:06 crc kubenswrapper[4932]: I0218 20:35:06.035503 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerDied","Data":"e37184b7e67125463f4eb5eda4953c7cfea3d3bd4c0efeedfc7b40d067b85b17"} Feb 18 20:35:07 crc 
kubenswrapper[4932]: E0218 20:35:07.751905 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:35:07 crc kubenswrapper[4932]: E0218 20:35:07.752550 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Restart
Policy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:07 crc kubenswrapper[4932]: E0218 20:35:07.754044 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:35:08 crc kubenswrapper[4932]: E0218 20:35:08.061380 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:35:10 crc kubenswrapper[4932]: E0218 20:35:10.181716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:35:11 crc kubenswrapper[4932]: E0218 20:35:11.182309 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:35:14 
crc kubenswrapper[4932]: E0218 20:35:14.183630 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:35:19 crc kubenswrapper[4932]: E0218 20:35:19.736780 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:35:19 crc kubenswrapper[4932]: E0218 20:35:19.737987 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:19 crc kubenswrapper[4932]: E0218 20:35:19.739305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:35:22 crc kubenswrapper[4932]: E0218 20:35:22.183308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:35:23 crc kubenswrapper[4932]: E0218 20:35:23.182573 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.606114 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.607808 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.607962 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.608845 4932 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.609026 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" gracePeriod=600 Feb 18 20:35:27 crc kubenswrapper[4932]: E0218 20:35:27.745625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:35:28 crc kubenswrapper[4932]: E0218 20:35:28.181063 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.310790 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" exitCode=0 Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.310862 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"} Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.310919 4932 scope.go:117] "RemoveContainer" containerID="78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef" Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.312527 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:35:28 crc kubenswrapper[4932]: E0218 20:35:28.316709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:35:31 crc kubenswrapper[4932]: E0218 20:35:31.183756 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:35:35 crc kubenswrapper[4932]: E0218 20:35:35.180816 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:35:36 crc kubenswrapper[4932]: E0218 20:35:36.183621 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:35:41 crc kubenswrapper[4932]: E0218 20:35:41.183671 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:35:44 crc kubenswrapper[4932]: I0218 20:35:44.180132 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.180821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.714791 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.715027 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.716294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:35:46 crc kubenswrapper[4932]: E0218 20:35:46.183144 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:35:49 crc kubenswrapper[4932]: E0218 20:35:49.183031 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:35:55 crc kubenswrapper[4932]: E0218 20:35:55.193400 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:35:56 crc kubenswrapper[4932]: E0218 20:35:56.182260 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:35:57 crc kubenswrapper[4932]: 
I0218 20:35:57.188703 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:35:57 crc kubenswrapper[4932]: E0218 20:35:57.189769 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:35:59 crc kubenswrapper[4932]: E0218 20:35:59.182564 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:36:01 crc kubenswrapper[4932]: E0218 20:36:01.182542 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:36:07 crc kubenswrapper[4932]: E0218 20:36:07.197996 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:36:09 crc kubenswrapper[4932]: E0218 20:36:09.183099 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:36:11 crc kubenswrapper[4932]: I0218 20:36:11.179979 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:36:11 crc kubenswrapper[4932]: E0218 20:36:11.180884 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:36:14 crc kubenswrapper[4932]: E0218 20:36:14.190777 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:36:16 crc kubenswrapper[4932]: E0218 20:36:16.181370 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:36:21 crc kubenswrapper[4932]: E0218 20:36:21.183217 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:36:22 crc kubenswrapper[4932]: E0218 20:36:22.181947 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:36:23 crc kubenswrapper[4932]: I0218 20:36:23.179115 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:36:23 crc kubenswrapper[4932]: E0218 20:36:23.179479 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:36:26 crc kubenswrapper[4932]: E0218 20:36:26.323236 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:36:26 crc kubenswrapper[4932]: E0218 20:36:26.324044 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:26 crc kubenswrapper[4932]: E0218 20:36:26.325280 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:36:30 crc kubenswrapper[4932]: E0218 20:36:30.180972 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:36:33 crc kubenswrapper[4932]: E0218 20:36:33.112583 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:36:33 crc kubenswrapper[4932]: E0218 20:36:33.113403 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:33 crc kubenswrapper[4932]: E0218 20:36:33.115103 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:36:35 crc kubenswrapper[4932]: E0218 20:36:35.182458 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:36:37 crc kubenswrapper[4932]: I0218 20:36:37.186043 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:36:37 crc kubenswrapper[4932]: E0218 20:36:37.187071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:36:42 crc kubenswrapper[4932]: E0218 20:36:42.181553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:36:42 crc kubenswrapper[4932]: E0218 20:36:42.182047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:36:48 crc kubenswrapper[4932]: E0218 20:36:48.184876 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:36:49 crc kubenswrapper[4932]: I0218 20:36:49.180627 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:36:49 crc kubenswrapper[4932]: E0218 20:36:49.181286 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:36:50 crc kubenswrapper[4932]: E0218 20:36:50.764410 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:36:50 crc kubenswrapper[4932]: E0218 20:36:50.764944 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:50 crc kubenswrapper[4932]: E0218 20:36:50.766759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:36:55 crc kubenswrapper[4932]: E0218 20:36:55.186386 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:36:57 crc kubenswrapper[4932]: E0218 20:36:57.194332 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:00 crc kubenswrapper[4932]: I0218 20:37:00.180656 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:00 crc kubenswrapper[4932]: E0218 20:37:00.181291 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:04 crc kubenswrapper[4932]: E0218 20:37:04.186600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:04 crc kubenswrapper[4932]: E0218 20:37:04.186618 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:06 crc kubenswrapper[4932]: E0218 20:37:06.180669 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:12 crc kubenswrapper[4932]: E0218 20:37:12.182445 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:15 crc kubenswrapper[4932]: I0218 20:37:15.194727 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:15 crc kubenswrapper[4932]: E0218 20:37:15.195637 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:17 crc kubenswrapper[4932]: E0218 20:37:17.190943 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:17 crc kubenswrapper[4932]: E0218 20:37:17.191509 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:17 crc kubenswrapper[4932]: E0218 20:37:17.190836 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:23 crc kubenswrapper[4932]: E0218 20:37:23.184824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:27 crc kubenswrapper[4932]: I0218 20:37:27.194424 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:27 crc kubenswrapper[4932]: E0218 20:37:27.195770 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:28 crc kubenswrapper[4932]: E0218 20:37:28.183417 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:30 crc kubenswrapper[4932]: E0218 20:37:30.181916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:31 crc kubenswrapper[4932]: E0218 20:37:31.183784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:35 crc kubenswrapper[4932]: E0218 20:37:35.184279 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:39 crc kubenswrapper[4932]: I0218 20:37:39.180422 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:39 crc kubenswrapper[4932]: E0218 20:37:39.181451 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:42 crc kubenswrapper[4932]: E0218 20:37:42.183207 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:44 crc kubenswrapper[4932]: E0218 20:37:44.182271 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:45 crc kubenswrapper[4932]: E0218 20:37:45.181426 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:48 crc kubenswrapper[4932]: E0218 20:37:48.183340 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:50 crc kubenswrapper[4932]: I0218 20:37:50.179432 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:50 crc kubenswrapper[4932]: E0218 20:37:50.180086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:53 crc kubenswrapper[4932]: E0218 20:37:53.183223 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:56 crc kubenswrapper[4932]: E0218 20:37:56.183820 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:57 crc kubenswrapper[4932]: E0218 20:37:57.363295 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:37:57 crc kubenswrapper[4932]: E0218 20:37:57.363926 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:57 crc kubenswrapper[4932]: E0218 20:37:57.365163 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:02 crc kubenswrapper[4932]: E0218 20:38:02.181360 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:05 crc kubenswrapper[4932]: I0218 20:38:05.178864 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:05 crc kubenswrapper[4932]: E0218 20:38:05.179594 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:08 crc kubenswrapper[4932]: E0218 20:38:08.182902 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:11 crc kubenswrapper[4932]: E0218 20:38:11.183299 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:13 crc kubenswrapper[4932]: E0218 20:38:13.181116 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:16 crc kubenswrapper[4932]: E0218 20:38:16.182157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:19 crc kubenswrapper[4932]: I0218 20:38:19.180209 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:19 crc kubenswrapper[4932]: E0218 20:38:19.181229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:22 crc kubenswrapper[4932]: E0218 20:38:22.183134 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" 
podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:25 crc kubenswrapper[4932]: E0218 20:38:25.182393 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:26 crc kubenswrapper[4932]: E0218 20:38:26.181542 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:29 crc kubenswrapper[4932]: E0218 20:38:29.183269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:31 crc kubenswrapper[4932]: I0218 20:38:31.180440 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:31 crc kubenswrapper[4932]: E0218 20:38:31.182838 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:35 crc kubenswrapper[4932]: E0218 
20:38:35.181955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:40 crc kubenswrapper[4932]: E0218 20:38:40.181745 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:40 crc kubenswrapper[4932]: I0218 20:38:40.182527 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:38:41 crc kubenswrapper[4932]: E0218 20:38:41.181082 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:43 crc kubenswrapper[4932]: E0218 20:38:43.248787 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:38:43 crc kubenswrapper[4932]: E0218 20:38:43.249049 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:43 crc kubenswrapper[4932]: E0218 20:38:43.250202 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:46 crc kubenswrapper[4932]: I0218 20:38:46.179407 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:46 crc kubenswrapper[4932]: E0218 20:38:46.180254 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:48 crc kubenswrapper[4932]: E0218 20:38:48.182827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:54 crc kubenswrapper[4932]: E0218 20:38:54.186842 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:55 crc kubenswrapper[4932]: E0218 20:38:55.182023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:58 crc kubenswrapper[4932]: E0218 20:38:58.182364 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:59 crc kubenswrapper[4932]: I0218 20:38:59.179356 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:59 crc kubenswrapper[4932]: E0218 20:38:59.180055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:01 crc kubenswrapper[4932]: E0218 20:39:01.183615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:07 crc kubenswrapper[4932]: E0218 20:39:07.190782 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:07 crc kubenswrapper[4932]: E0218 20:39:07.190871 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:12 crc kubenswrapper[4932]: E0218 20:39:12.182581 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:14 crc kubenswrapper[4932]: I0218 20:39:14.179616 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:14 crc kubenswrapper[4932]: E0218 20:39:14.180574 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:16 crc kubenswrapper[4932]: E0218 20:39:16.182479 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:18 crc kubenswrapper[4932]: E0218 20:39:18.195111 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:18 crc kubenswrapper[4932]: E0218 20:39:18.214043 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:25 crc kubenswrapper[4932]: E0218 20:39:25.182580 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:27 crc kubenswrapper[4932]: I0218 20:39:27.196896 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:27 crc kubenswrapper[4932]: E0218 20:39:27.198061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:28 crc kubenswrapper[4932]: E0218 20:39:28.182132 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" 
podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:32 crc kubenswrapper[4932]: E0218 20:39:32.182024 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:33 crc kubenswrapper[4932]: E0218 20:39:33.182667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:39 crc kubenswrapper[4932]: I0218 20:39:39.180720 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:39 crc kubenswrapper[4932]: E0218 20:39:39.183102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:40 crc kubenswrapper[4932]: E0218 20:39:40.184248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:40 crc kubenswrapper[4932]: E0218 
20:39:40.184418 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:45 crc kubenswrapper[4932]: E0218 20:39:45.182717 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:45 crc kubenswrapper[4932]: E0218 20:39:45.183465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:51 crc kubenswrapper[4932]: E0218 20:39:51.184708 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:52 crc kubenswrapper[4932]: E0218 20:39:52.183035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:54 crc kubenswrapper[4932]: I0218 20:39:54.180825 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:54 crc kubenswrapper[4932]: E0218 20:39:54.181802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:56 crc kubenswrapper[4932]: E0218 20:39:56.183689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:00 crc kubenswrapper[4932]: E0218 20:40:00.182492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:05 crc kubenswrapper[4932]: E0218 20:40:05.436507 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:06 crc kubenswrapper[4932]: E0218 20:40:06.183511 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:07 crc kubenswrapper[4932]: I0218 20:40:07.198906 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:40:07 crc kubenswrapper[4932]: E0218 20:40:07.199798 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:40:10 crc kubenswrapper[4932]: E0218 20:40:10.181790 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:13 crc kubenswrapper[4932]: E0218 20:40:13.542496 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:19 crc kubenswrapper[4932]: E0218 
20:40:19.183424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:20 crc kubenswrapper[4932]: I0218 20:40:20.180222 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:40:20 crc kubenswrapper[4932]: E0218 20:40:20.180545 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:40:21 crc kubenswrapper[4932]: E0218 20:40:21.183499 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:23 crc kubenswrapper[4932]: E0218 20:40:23.182751 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:27 crc kubenswrapper[4932]: E0218 20:40:27.199849 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:34 crc kubenswrapper[4932]: I0218 20:40:34.182213 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:40:34 crc kubenswrapper[4932]: E0218 20:40:34.186450 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:34 crc kubenswrapper[4932]: I0218 20:40:34.821682 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b"} Feb 18 20:40:35 crc kubenswrapper[4932]: E0218 20:40:35.181576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:35 crc kubenswrapper[4932]: E0218 20:40:35.181808 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:44 crc kubenswrapper[4932]: E0218 20:40:44.607393 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:40:44 crc kubenswrapper[4932]: E0218 20:40:44.608081 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:f
alse,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:44 crc kubenswrapper[4932]: E0218 20:40:44.610242 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:47 crc kubenswrapper[4932]: E0218 20:40:47.204674 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:48 crc kubenswrapper[4932]: E0218 20:40:48.183267 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:49 crc kubenswrapper[4932]: E0218 20:40:49.184130 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:56 crc kubenswrapper[4932]: E0218 20:40:56.182763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:01 crc kubenswrapper[4932]: E0218 20:41:01.183247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:01 crc kubenswrapper[4932]: E0218 20:41:01.183323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:01 crc kubenswrapper[4932]: E0218 20:41:01.183544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:11 crc kubenswrapper[4932]: E0218 20:41:11.183107 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:12 crc kubenswrapper[4932]: E0218 20:41:12.180672 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:14 crc kubenswrapper[4932]: E0218 20:41:14.181460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:15 crc kubenswrapper[4932]: E0218 20:41:15.182407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:24 crc kubenswrapper[4932]: E0218 20:41:24.182612 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:26 crc kubenswrapper[4932]: E0218 20:41:26.182389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:26 crc kubenswrapper[4932]: E0218 20:41:26.182399 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:27 crc kubenswrapper[4932]: E0218 20:41:27.195824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:38 crc kubenswrapper[4932]: E0218 20:41:38.184568 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:39 crc kubenswrapper[4932]: E0218 20:41:39.183871 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:39 crc kubenswrapper[4932]: E0218 20:41:39.184268 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:40 crc kubenswrapper[4932]: E0218 20:41:40.621642 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:41:40 crc kubenswrapper[4932]: E0218 20:41:40.622278 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:41:40 crc kubenswrapper[4932]: E0218 20:41:40.623562 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:50 crc kubenswrapper[4932]: E0218 20:41:50.183646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:51 crc kubenswrapper[4932]: E0218 20:41:51.182087 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:51 crc kubenswrapper[4932]: E0218 20:41:51.182926 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:54 crc kubenswrapper[4932]: E0218 20:41:54.183441 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.183945 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.945021 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.945550 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.946793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:04 crc 
kubenswrapper[4932]: E0218 20:42:04.183933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:08 crc kubenswrapper[4932]: E0218 20:42:08.184990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:15 crc kubenswrapper[4932]: E0218 20:42:15.183996 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:15 crc kubenswrapper[4932]: E0218 20:42:15.184613 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:18 crc kubenswrapper[4932]: E0218 20:42:18.181607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:19 crc kubenswrapper[4932]: E0218 20:42:19.182627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:26 crc kubenswrapper[4932]: E0218 20:42:26.182675 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:28 crc kubenswrapper[4932]: E0218 20:42:28.182093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:30 crc kubenswrapper[4932]: E0218 20:42:30.182249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:31 crc kubenswrapper[4932]: E0218 20:42:31.182918 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:40 crc kubenswrapper[4932]: E0218 20:42:40.181697 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:40 crc kubenswrapper[4932]: E0218 20:42:40.181767 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:41 crc kubenswrapper[4932]: E0218 20:42:41.182439 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:44 crc kubenswrapper[4932]: E0218 20:42:44.182253 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:52 crc kubenswrapper[4932]: E0218 20:42:52.185129 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:53 crc kubenswrapper[4932]: E0218 20:42:53.183499 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:56 crc kubenswrapper[4932]: E0218 20:42:56.183133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:57 crc kubenswrapper[4932]: I0218 20:42:57.606332 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:42:57 crc kubenswrapper[4932]: I0218 20:42:57.606779 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:42:59 crc kubenswrapper[4932]: E0218 20:42:59.181929 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:04 crc kubenswrapper[4932]: E0218 20:43:04.189236 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:07 crc kubenswrapper[4932]: E0218 20:43:07.196824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:10 crc kubenswrapper[4932]: E0218 20:43:10.182162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:12 crc kubenswrapper[4932]: E0218 20:43:12.181635 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:16 crc kubenswrapper[4932]: E0218 20:43:16.182654 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:18 crc kubenswrapper[4932]: E0218 20:43:18.181645 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:24 crc kubenswrapper[4932]: E0218 20:43:24.183744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:26 crc kubenswrapper[4932]: E0218 20:43:26.182528 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:27 crc kubenswrapper[4932]: I0218 20:43:27.606336 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:43:27 crc kubenswrapper[4932]: I0218 20:43:27.606618 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:43:29 crc kubenswrapper[4932]: E0218 20:43:29.182508 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:31 crc kubenswrapper[4932]: E0218 20:43:31.182985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:35 crc kubenswrapper[4932]: E0218 20:43:35.182682 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:37 crc kubenswrapper[4932]: E0218 20:43:37.182963 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:43 crc kubenswrapper[4932]: E0218 20:43:43.183361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:44 crc kubenswrapper[4932]: I0218 20:43:44.185825 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:43:46 crc kubenswrapper[4932]: E0218 20:43:46.680094 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:43:46 crc kubenswrapper[4932]: E0218 20:43:46.681153 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:43:46 crc kubenswrapper[4932]: E0218 20:43:46.682558 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:48 crc kubenswrapper[4932]: E0218 20:43:48.181793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:50 crc kubenswrapper[4932]: E0218 20:43:50.181691 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:54 crc kubenswrapper[4932]: E0218 20:43:54.183253 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.606124 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.606807 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.606868 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.608071 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.608204 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b" gracePeriod=600 Feb 18 20:43:58 crc kubenswrapper[4932]: I0218 20:43:58.349862 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b" exitCode=0 Feb 18 20:43:58 crc kubenswrapper[4932]: I0218 20:43:58.349991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b"} Feb 18 20:43:58 crc kubenswrapper[4932]: I0218 20:43:58.350438 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:43:59 crc kubenswrapper[4932]: E0218 20:43:59.182860 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:59 crc kubenswrapper[4932]: I0218 20:43:59.359500 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"} Feb 18 20:44:01 crc kubenswrapper[4932]: E0218 20:44:01.198328 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:02 crc kubenswrapper[4932]: E0218 20:44:02.181519 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:08 crc kubenswrapper[4932]: E0218 20:44:08.182571 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:13 crc kubenswrapper[4932]: E0218 20:44:13.185698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:13 crc kubenswrapper[4932]: E0218 20:44:13.186041 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:14 crc kubenswrapper[4932]: E0218 20:44:14.183701 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:20 crc kubenswrapper[4932]: E0218 20:44:20.182748 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:26 crc kubenswrapper[4932]: E0218 20:44:26.183286 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:28 crc kubenswrapper[4932]: E0218 20:44:28.184213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:28 crc kubenswrapper[4932]: E0218 20:44:28.184263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:35 crc kubenswrapper[4932]: E0218 20:44:35.187571 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:37 crc kubenswrapper[4932]: E0218 20:44:37.214814 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:40 crc kubenswrapper[4932]: E0218 20:44:40.181315 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:43 crc kubenswrapper[4932]: E0218 20:44:43.183842 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:47 crc kubenswrapper[4932]: E0218 20:44:47.191985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:50 crc kubenswrapper[4932]: E0218 20:44:50.188395 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:55 crc kubenswrapper[4932]: E0218 20:44:55.183196 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:56 crc kubenswrapper[4932]: E0218 20:44:56.609248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.209504 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb"] Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.211689 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.214413 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.224706 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb"] Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.226884 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.271338 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.271412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.271461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.374502 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.374591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.374646 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.375878 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.391761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.392192 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.542404 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.085103 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb"] Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.691502 4932 generic.go:334] "Generic (PLEG): container finished" podID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerID="df58ee848cab5e8c5456dcfe68de60c64126c02fb04b91974217490a862cd781" exitCode=0 Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.691557 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" event={"ID":"3d2e2003-21f3-440a-85dc-1b34c00c6199","Type":"ContainerDied","Data":"df58ee848cab5e8c5456dcfe68de60c64126c02fb04b91974217490a862cd781"} Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.691920 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" 
event={"ID":"3d2e2003-21f3-440a-85dc-1b34c00c6199","Type":"ContainerStarted","Data":"3a8a2b0c9977a7299a21c42c295c178472fc2536dbf7ac6ada2076a256f4ee05"} Feb 18 20:45:02 crc kubenswrapper[4932]: E0218 20:45:02.181492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.119950 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.247557 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"3d2e2003-21f3-440a-85dc-1b34c00c6199\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.248034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"3d2e2003-21f3-440a-85dc-1b34c00c6199\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.248158 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"3d2e2003-21f3-440a-85dc-1b34c00c6199\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.248766 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d2e2003-21f3-440a-85dc-1b34c00c6199" (UID: "3d2e2003-21f3-440a-85dc-1b34c00c6199"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.255161 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d2e2003-21f3-440a-85dc-1b34c00c6199" (UID: "3d2e2003-21f3-440a-85dc-1b34c00c6199"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.255685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj" (OuterVolumeSpecName: "kube-api-access-d6gnj") pod "3d2e2003-21f3-440a-85dc-1b34c00c6199" (UID: "3d2e2003-21f3-440a-85dc-1b34c00c6199"). InnerVolumeSpecName "kube-api-access-d6gnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.351969 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.352018 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.352037 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.718802 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" event={"ID":"3d2e2003-21f3-440a-85dc-1b34c00c6199","Type":"ContainerDied","Data":"3a8a2b0c9977a7299a21c42c295c178472fc2536dbf7ac6ada2076a256f4ee05"} Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.718869 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a8a2b0c9977a7299a21c42c295c178472fc2536dbf7ac6ada2076a256f4ee05" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.718955 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:04 crc kubenswrapper[4932]: I0218 20:45:04.239965 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:45:04 crc kubenswrapper[4932]: I0218 20:45:04.251848 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:45:05 crc kubenswrapper[4932]: E0218 20:45:05.183061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:05 crc kubenswrapper[4932]: I0218 20:45:05.197649 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" path="/var/lib/kubelet/pods/9637eec3-3d3f-435b-9a57-ef318aa5300c/volumes" Feb 18 20:45:06 crc kubenswrapper[4932]: I0218 20:45:06.986545 4932 scope.go:117] "RemoveContainer" containerID="a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f" Feb 18 20:45:09 crc kubenswrapper[4932]: E0218 20:45:09.187424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:10 crc kubenswrapper[4932]: E0218 20:45:10.181982 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:16 crc kubenswrapper[4932]: E0218 20:45:16.183457 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:20 crc kubenswrapper[4932]: E0218 20:45:20.182579 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:24 crc kubenswrapper[4932]: E0218 20:45:24.181662 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:24 crc kubenswrapper[4932]: E0218 20:45:24.181693 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:31 crc kubenswrapper[4932]: E0218 20:45:31.182313 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:34 crc kubenswrapper[4932]: E0218 20:45:34.181952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:35 crc kubenswrapper[4932]: E0218 20:45:35.186075 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:36 crc kubenswrapper[4932]: E0218 20:45:36.181152 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:42 crc kubenswrapper[4932]: E0218 20:45:42.183126 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:46 crc kubenswrapper[4932]: E0218 20:45:46.182614 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:48 crc kubenswrapper[4932]: E0218 20:45:48.984034 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:45:48 crc kubenswrapper[4932]: E0218 20:45:48.984724 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:45:48 crc kubenswrapper[4932]: E0218 20:45:48.985995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:50 crc kubenswrapper[4932]: E0218 20:45:50.183061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:54 crc kubenswrapper[4932]: E0218 20:45:54.182713 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:01 crc kubenswrapper[4932]: E0218 20:46:01.184199 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:03 crc kubenswrapper[4932]: E0218 20:46:03.183093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:04 crc kubenswrapper[4932]: E0218 20:46:04.182665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:07 crc kubenswrapper[4932]: E0218 20:46:07.196709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:16 crc kubenswrapper[4932]: E0218 20:46:16.183682 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:16 crc kubenswrapper[4932]: E0218 20:46:16.183705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:16 crc kubenswrapper[4932]: E0218 20:46:16.184665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:22 crc kubenswrapper[4932]: E0218 20:46:22.181538 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:27 crc kubenswrapper[4932]: I0218 20:46:27.606643 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:46:27 crc kubenswrapper[4932]: I0218 20:46:27.607415 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:46:29 crc kubenswrapper[4932]: E0218 20:46:29.183130 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:29 crc kubenswrapper[4932]: E0218 20:46:29.183512 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:29 crc kubenswrapper[4932]: E0218 20:46:29.184169 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:32 crc kubenswrapper[4932]: E0218 20:46:32.090515 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:59140->38.102.83.190:41227: write tcp 38.102.83.190:59140->38.102.83.190:41227: write: broken pipe Feb 18 20:46:35 crc kubenswrapper[4932]: E0218 20:46:35.183072 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.181484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.873230 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.873543 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.874800 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:44 crc kubenswrapper[4932]: E0218 20:46:44.182388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:48 crc kubenswrapper[4932]: E0218 20:46:48.183133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:55 crc kubenswrapper[4932]: E0218 20:46:55.184096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:56 crc kubenswrapper[4932]: E0218 20:46:56.182275 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:57 crc 
kubenswrapper[4932]: I0218 20:46:57.606158 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:46:57 crc kubenswrapper[4932]: I0218 20:46:57.606282 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:46:58 crc kubenswrapper[4932]: E0218 20:46:58.183205 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:05 crc kubenswrapper[4932]: E0218 20:47:05.238613 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:47:05 crc kubenswrapper[4932]: E0218 20:47:05.239426 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:47:05 crc kubenswrapper[4932]: E0218 20:47:05.240721 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:08 crc kubenswrapper[4932]: E0218 20:47:08.182857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:09 crc kubenswrapper[4932]: E0218 20:47:09.182731 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:12 crc kubenswrapper[4932]: E0218 20:47:12.182347 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:18 crc kubenswrapper[4932]: E0218 20:47:18.184075 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:19 crc kubenswrapper[4932]: E0218 20:47:19.196487 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:23 crc kubenswrapper[4932]: E0218 20:47:23.184112 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:24 crc kubenswrapper[4932]: E0218 20:47:24.180961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.605925 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.606700 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.606767 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.607907 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.607998 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" gracePeriod=600 Feb 18 20:47:27 crc kubenswrapper[4932]: E0218 20:47:27.744465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.589385 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" exitCode=0 Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.589488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"} Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.590563 4932 scope.go:117] "RemoveContainer" containerID="8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b" Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.597867 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:47:28 crc kubenswrapper[4932]: E0218 20:47:28.599031 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:30 crc kubenswrapper[4932]: E0218 20:47:30.182249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:32 crc kubenswrapper[4932]: E0218 20:47:32.182065 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:34 crc kubenswrapper[4932]: E0218 20:47:34.183436 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:35 crc kubenswrapper[4932]: E0218 20:47:35.182449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:42 crc kubenswrapper[4932]: I0218 20:47:42.179656 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:47:42 crc kubenswrapper[4932]: E0218 20:47:42.181119 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:42 crc kubenswrapper[4932]: E0218 20:47:42.183402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:43 crc kubenswrapper[4932]: E0218 20:47:43.185576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:45 crc kubenswrapper[4932]: E0218 20:47:45.181940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:46 crc kubenswrapper[4932]: E0218 20:47:46.181294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:54 crc kubenswrapper[4932]: I0218 20:47:54.179453 4932 scope.go:117] 
"RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:47:54 crc kubenswrapper[4932]: E0218 20:47:54.180196 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:54 crc kubenswrapper[4932]: E0218 20:47:54.184497 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:56 crc kubenswrapper[4932]: E0218 20:47:56.183009 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:59 crc kubenswrapper[4932]: E0218 20:47:59.184571 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:00 crc kubenswrapper[4932]: E0218 20:48:00.181883 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:06 crc kubenswrapper[4932]: I0218 20:48:06.179796 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:06 crc kubenswrapper[4932]: E0218 20:48:06.180643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:06 crc kubenswrapper[4932]: E0218 20:48:06.182283 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:08 crc kubenswrapper[4932]: E0218 20:48:08.197494 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:12 crc kubenswrapper[4932]: E0218 20:48:12.182744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:14 crc kubenswrapper[4932]: E0218 20:48:14.184493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:19 crc kubenswrapper[4932]: E0218 20:48:19.182077 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:20 crc kubenswrapper[4932]: I0218 20:48:20.179382 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:20 crc kubenswrapper[4932]: E0218 20:48:20.180313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:20 crc kubenswrapper[4932]: E0218 20:48:20.180619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:24 crc kubenswrapper[4932]: E0218 20:48:24.181637 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:27 crc kubenswrapper[4932]: E0218 20:48:27.192162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:30 crc kubenswrapper[4932]: E0218 20:48:30.183453 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:32 crc kubenswrapper[4932]: I0218 20:48:32.179582 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:32 crc kubenswrapper[4932]: E0218 20:48:32.180674 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:35 crc kubenswrapper[4932]: E0218 20:48:35.183421 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:36 crc kubenswrapper[4932]: E0218 20:48:36.181795 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:42 crc kubenswrapper[4932]: E0218 20:48:42.180848 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:42 crc kubenswrapper[4932]: E0218 20:48:42.181277 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:44 crc kubenswrapper[4932]: I0218 20:48:44.180491 4932 scope.go:117] 
"RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:44 crc kubenswrapper[4932]: E0218 20:48:44.180982 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:46 crc kubenswrapper[4932]: E0218 20:48:46.186690 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:48 crc kubenswrapper[4932]: E0218 20:48:48.184256 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:56 crc kubenswrapper[4932]: I0218 20:48:56.181074 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.182462 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.183692 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:56 crc kubenswrapper[4932]: I0218 20:48:56.184292 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.713820 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.714368 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.717984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:01 crc kubenswrapper[4932]: E0218 20:49:01.181448 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:01 crc kubenswrapper[4932]: E0218 20:49:01.181766 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:09 crc kubenswrapper[4932]: E0218 20:49:09.182054 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:09 crc kubenswrapper[4932]: E0218 20:49:09.182536 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:10 crc kubenswrapper[4932]: I0218 20:49:10.179299 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:10 crc kubenswrapper[4932]: E0218 20:49:10.179864 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:12 crc kubenswrapper[4932]: E0218 20:49:12.182263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:16 crc kubenswrapper[4932]: E0218 20:49:16.183967 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:20 crc kubenswrapper[4932]: E0218 20:49:20.181292 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:22 crc kubenswrapper[4932]: E0218 20:49:22.183156 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:24 crc kubenswrapper[4932]: E0218 20:49:24.182677 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:25 crc kubenswrapper[4932]: I0218 20:49:25.180377 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:25 crc kubenswrapper[4932]: E0218 20:49:25.180977 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:31 crc kubenswrapper[4932]: E0218 20:49:31.184323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:31 crc kubenswrapper[4932]: E0218 20:49:31.184415 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:34 crc kubenswrapper[4932]: E0218 20:49:34.180916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:38 crc kubenswrapper[4932]: I0218 20:49:38.178845 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:38 crc kubenswrapper[4932]: E0218 20:49:38.179725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:38 crc kubenswrapper[4932]: E0218 20:49:38.182047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:43 crc kubenswrapper[4932]: E0218 20:49:43.181375 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:43 crc kubenswrapper[4932]: E0218 20:49:43.182782 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:47 crc kubenswrapper[4932]: E0218 20:49:47.192247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:49 crc kubenswrapper[4932]: I0218 20:49:49.179940 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:49 crc kubenswrapper[4932]: E0218 20:49:49.180819 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:51 crc kubenswrapper[4932]: E0218 20:49:51.183147 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:56 crc kubenswrapper[4932]: E0218 20:49:56.182497 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:57 crc kubenswrapper[4932]: E0218 20:49:57.200164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:00 crc kubenswrapper[4932]: E0218 20:50:00.181696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:02 crc kubenswrapper[4932]: I0218 20:50:02.179390 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:02 crc kubenswrapper[4932]: E0218 20:50:02.180088 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:05 crc kubenswrapper[4932]: E0218 20:50:05.184540 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:11 crc kubenswrapper[4932]: E0218 20:50:11.181560 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:11 crc kubenswrapper[4932]: E0218 20:50:11.181602 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:13 crc kubenswrapper[4932]: E0218 20:50:13.180995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:16 crc kubenswrapper[4932]: I0218 20:50:16.179313 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:16 crc kubenswrapper[4932]: E0218 20:50:16.179809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:18 crc kubenswrapper[4932]: E0218 20:50:18.182640 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:23 crc kubenswrapper[4932]: E0218 20:50:23.184484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:25 crc kubenswrapper[4932]: E0218 20:50:25.182446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:26 crc kubenswrapper[4932]: E0218 20:50:26.180929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:30 crc kubenswrapper[4932]: I0218 20:50:30.179712 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:30 crc kubenswrapper[4932]: E0218 20:50:30.180827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:33 crc kubenswrapper[4932]: E0218 20:50:33.183809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:37 crc kubenswrapper[4932]: E0218 20:50:37.189646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:38 crc kubenswrapper[4932]: E0218 20:50:38.182722 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:38 crc kubenswrapper[4932]: E0218 20:50:38.182955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:43 crc kubenswrapper[4932]: I0218 20:50:43.180635 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:43 crc kubenswrapper[4932]: E0218 20:50:43.181786 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:46 crc kubenswrapper[4932]: E0218 20:50:46.181925 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:50 crc kubenswrapper[4932]: E0218 20:50:50.181365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.183454 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.606773 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.606940 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.608228 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:57 crc kubenswrapper[4932]: I0218 20:50:57.198315 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:57 crc kubenswrapper[4932]: E0218 20:50:57.201277 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:01 crc kubenswrapper[4932]: E0218 20:51:01.182669 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:02 crc kubenswrapper[4932]: E0218 20:51:02.182332 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:03 crc kubenswrapper[4932]: E0218 20:51:03.181625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:05 crc kubenswrapper[4932]: E0218 20:51:05.183257 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:08 crc kubenswrapper[4932]: I0218 20:51:08.180723 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:08 crc kubenswrapper[4932]: E0218 20:51:08.181959 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:13 crc kubenswrapper[4932]: E0218 20:51:13.185017 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:14 crc kubenswrapper[4932]: E0218 20:51:14.183794 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:15 crc kubenswrapper[4932]: E0218 
20:51:15.182464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:17 crc kubenswrapper[4932]: E0218 20:51:17.198210 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:20 crc kubenswrapper[4932]: I0218 20:51:20.179413 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:20 crc kubenswrapper[4932]: E0218 20:51:20.180402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:26 crc kubenswrapper[4932]: E0218 20:51:26.184086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:27 crc kubenswrapper[4932]: E0218 20:51:27.199565 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:27 crc kubenswrapper[4932]: E0218 20:51:27.201627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:32 crc kubenswrapper[4932]: E0218 20:51:32.183002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:33 crc kubenswrapper[4932]: I0218 20:51:33.179768 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:33 crc kubenswrapper[4932]: E0218 20:51:33.180407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:38 crc kubenswrapper[4932]: E0218 20:51:38.182090 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:38 crc kubenswrapper[4932]: E0218 20:51:38.182343 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:42 crc kubenswrapper[4932]: E0218 20:51:42.185412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:46 crc kubenswrapper[4932]: E0218 20:51:46.182736 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:47 crc kubenswrapper[4932]: I0218 20:51:47.192407 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:47 crc kubenswrapper[4932]: E0218 20:51:47.193042 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:51 crc kubenswrapper[4932]: E0218 20:51:51.184474 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:55 crc kubenswrapper[4932]: E0218 20:51:55.292714 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:51:55 crc kubenswrapper[4932]: E0218 20:51:55.293653 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:51:55 crc kubenswrapper[4932]: E0218 20:51:55.295249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:56 crc kubenswrapper[4932]: E0218 20:51:56.182297 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:59 crc kubenswrapper[4932]: E0218 20:51:59.183356 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:01 crc kubenswrapper[4932]: I0218 20:52:01.179316 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:01 crc kubenswrapper[4932]: E0218 20:52:01.180164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:52:06 crc kubenswrapper[4932]: E0218 20:52:06.182462 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:08 crc kubenswrapper[4932]: E0218 20:52:08.183071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.182327 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.566732 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.567160 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.568667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:12 crc kubenswrapper[4932]: I0218 20:52:12.179949 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:12 crc kubenswrapper[4932]: E0218 20:52:12.180916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:52:19 crc kubenswrapper[4932]: E0218 20:52:19.182416 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:22 crc kubenswrapper[4932]: E0218 20:52:22.182245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:23 crc 
kubenswrapper[4932]: E0218 20:52:23.182407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:23 crc kubenswrapper[4932]: E0218 20:52:23.182516 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:24 crc kubenswrapper[4932]: I0218 20:52:24.180558 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:24 crc kubenswrapper[4932]: E0218 20:52:24.181210 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:52:32 crc kubenswrapper[4932]: E0218 20:52:32.181229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:35 crc kubenswrapper[4932]: E0218 
20:52:35.186408 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:35 crc kubenswrapper[4932]: E0218 20:52:35.191304 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:37 crc kubenswrapper[4932]: E0218 20:52:37.196786 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:38 crc kubenswrapper[4932]: I0218 20:52:38.179831 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:39 crc kubenswrapper[4932]: I0218 20:52:39.376932 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6"} Feb 18 20:52:46 crc kubenswrapper[4932]: E0218 20:52:46.184548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:47 crc kubenswrapper[4932]: E0218 20:52:47.198388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:50 crc kubenswrapper[4932]: E0218 20:52:50.182846 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:50 crc kubenswrapper[4932]: E0218 20:52:50.184155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:59 crc kubenswrapper[4932]: E0218 20:52:59.183234 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:01 crc kubenswrapper[4932]: E0218 20:53:01.183322 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:02 crc kubenswrapper[4932]: E0218 20:53:02.183465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:04 crc kubenswrapper[4932]: E0218 20:53:04.181389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:10 crc kubenswrapper[4932]: E0218 20:53:10.183305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:12 crc kubenswrapper[4932]: E0218 20:53:12.180168 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:14 crc kubenswrapper[4932]: E0218 
20:53:14.182902 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:19 crc kubenswrapper[4932]: E0218 20:53:19.183460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:22 crc kubenswrapper[4932]: E0218 20:53:22.182842 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:26 crc kubenswrapper[4932]: E0218 20:53:26.184405 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:27 crc kubenswrapper[4932]: E0218 20:53:27.190321 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:30 crc kubenswrapper[4932]: E0218 20:53:30.182296 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:36 crc kubenswrapper[4932]: E0218 20:53:36.182424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:38 crc kubenswrapper[4932]: E0218 20:53:38.182265 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:39 crc kubenswrapper[4932]: E0218 20:53:39.181655 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:43 crc kubenswrapper[4932]: E0218 20:53:43.192425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:50 crc kubenswrapper[4932]: E0218 20:53:50.181249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:50 crc kubenswrapper[4932]: E0218 20:53:50.181573 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:52 crc kubenswrapper[4932]: E0218 20:53:52.182504 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:55 crc kubenswrapper[4932]: E0218 20:53:55.183029 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:02 crc kubenswrapper[4932]: I0218 20:54:02.181221 4932 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 18 20:54:03 crc kubenswrapper[4932]: E0218 20:54:03.181152 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:04 crc kubenswrapper[4932]: E0218 20:54:04.181645 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:06 crc kubenswrapper[4932]: E0218 20:54:06.112333 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:54:06 crc kubenswrapper[4932]: E0218 20:54:06.112586 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:54:06 crc kubenswrapper[4932]: E0218 20:54:06.113695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:09 crc kubenswrapper[4932]: E0218 20:54:09.181940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:17 crc kubenswrapper[4932]: E0218 20:54:17.196529 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:17 crc kubenswrapper[4932]: E0218 20:54:17.197073 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:19 crc kubenswrapper[4932]: E0218 20:54:19.182600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:22 crc kubenswrapper[4932]: E0218 20:54:22.181901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:29 crc kubenswrapper[4932]: E0218 20:54:29.182521 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:31 crc kubenswrapper[4932]: E0218 20:54:31.182958 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:32 crc kubenswrapper[4932]: E0218 20:54:32.182919 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:34 crc kubenswrapper[4932]: E0218 20:54:34.181723 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:40 crc kubenswrapper[4932]: E0218 20:54:40.193261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:45 crc kubenswrapper[4932]: E0218 20:54:45.188754 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:45 crc kubenswrapper[4932]: E0218 20:54:45.188858 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:48 crc kubenswrapper[4932]: E0218 20:54:48.181694 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:54 crc kubenswrapper[4932]: E0218 
20:54:54.190757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:54:57 crc kubenswrapper[4932]: I0218 20:54:57.606780 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:54:57 crc kubenswrapper[4932]: I0218 20:54:57.607538 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:55:00 crc kubenswrapper[4932]: E0218 20:55:00.184394 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:00 crc kubenswrapper[4932]: E0218 20:55:00.184441 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:02 crc kubenswrapper[4932]: E0218 20:55:02.182533 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:08 crc kubenswrapper[4932]: E0218 20:55:08.182410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:11 crc kubenswrapper[4932]: E0218 20:55:11.183046 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:12 crc kubenswrapper[4932]: E0218 20:55:12.183015 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:16 crc kubenswrapper[4932]: E0218 20:55:16.182102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:21 crc kubenswrapper[4932]: E0218 20:55:21.183260 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:25 crc kubenswrapper[4932]: E0218 20:55:25.187747 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:26 crc kubenswrapper[4932]: E0218 20:55:26.183468 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:27 crc kubenswrapper[4932]: I0218 20:55:27.606245 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:55:27 crc kubenswrapper[4932]: I0218 20:55:27.606638 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:55:29 crc kubenswrapper[4932]: E0218 20:55:29.182011 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:33 crc kubenswrapper[4932]: E0218 20:55:33.186216 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:37 crc kubenswrapper[4932]: E0218 20:55:37.198064 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:39 crc kubenswrapper[4932]: E0218 20:55:39.181889 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:41 crc kubenswrapper[4932]: E0218 20:55:41.182827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:48 crc kubenswrapper[4932]: E0218 20:55:48.183025 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:50 crc kubenswrapper[4932]: E0218 20:55:50.181696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:52 crc kubenswrapper[4932]: E0218 20:55:52.182658 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:57 crc kubenswrapper[4932]: E0218 20:55:57.391127 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:55:57 crc kubenswrapper[4932]: E0218 20:55:57.391824 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:55:57 crc kubenswrapper[4932]: E0218 20:55:57.392902 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.605902 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.605993 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.606055 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.607235 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.607339 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6" gracePeriod=600
Feb 18 20:55:58 crc kubenswrapper[4932]: I0218 20:55:58.893555 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6" exitCode=0
Feb 18 20:55:58 crc kubenswrapper[4932]: I0218 20:55:58.893634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6"}
Feb 18 20:55:58 crc kubenswrapper[4932]: I0218 20:55:58.895421 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"
Feb 18 20:55:59 crc kubenswrapper[4932]: I0218 20:55:59.911480 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89"}
Feb 18 20:56:03 crc kubenswrapper[4932]: E0218 20:56:03.183720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:03 crc kubenswrapper[4932]: E0218 20:56:03.184420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:05 crc kubenswrapper[4932]: E0218 20:56:05.182922 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:10 crc kubenswrapper[4932]: E0218 20:56:10.182261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:14 crc kubenswrapper[4932]: E0218 20:56:14.186431 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:17 crc kubenswrapper[4932]: E0218 20:56:17.199445 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:18 crc kubenswrapper[4932]: E0218 20:56:18.181052 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:25 crc kubenswrapper[4932]: E0218 20:56:25.183034 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:28 crc kubenswrapper[4932]: E0218 20:56:28.183628 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:32 crc kubenswrapper[4932]: E0218 20:56:32.182548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:33 crc kubenswrapper[4932]: E0218 20:56:33.190524 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:36 crc kubenswrapper[4932]: E0218 20:56:36.182952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:40 crc kubenswrapper[4932]: E0218 20:56:40.182007 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:44 crc kubenswrapper[4932]: E0218 20:56:44.189985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:44 crc kubenswrapper[4932]: E0218 20:56:44.190067 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:48 crc kubenswrapper[4932]: E0218 20:56:48.183412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:55 crc kubenswrapper[4932]: E0218 20:56:55.182023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:58 crc kubenswrapper[4932]: E0218 20:56:58.182509 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:59 crc kubenswrapper[4932]: E0218 20:56:59.181498 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:00 crc kubenswrapper[4932]: E0218 20:57:00.383363 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 18 20:57:00 crc kubenswrapper[4932]: E0218 20:57:00.383888 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:57:00 crc kubenswrapper[4932]: E0218 20:57:00.385145 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:06 crc kubenswrapper[4932]: E0218 20:57:06.181761 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:13 crc kubenswrapper[4932]: E0218 20:57:13.181555 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:14 crc kubenswrapper[4932]: E0218 20:57:14.182335 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:15 crc kubenswrapper[4932]: E0218 20:57:15.896097 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:57:15 crc kubenswrapper[4932]: E0218 20:57:15.896882 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:57:15 crc kubenswrapper[4932]: E0218 20:57:15.898275 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:18 crc kubenswrapper[4932]: E0218 20:57:18.183070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:25 crc kubenswrapper[4932]: E0218 20:57:25.184072 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:26 crc kubenswrapper[4932]: E0218 20:57:26.183237 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:27 crc kubenswrapper[4932]: E0218 20:57:27.198757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:29 crc kubenswrapper[4932]: E0218 20:57:29.184046 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:38 crc kubenswrapper[4932]: E0218 20:57:38.183691 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:40 crc kubenswrapper[4932]: E0218 20:57:40.187622 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:42 crc kubenswrapper[4932]: E0218 20:57:42.181891 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:42 crc kubenswrapper[4932]: E0218 20:57:42.182424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:49 crc kubenswrapper[4932]: E0218 20:57:49.182802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:51 crc kubenswrapper[4932]: E0218 20:57:51.184396 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:53 crc kubenswrapper[4932]: E0218 20:57:53.183152 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:53 crc kubenswrapper[4932]: E0218 20:57:53.183225 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:58:04 crc kubenswrapper[4932]: E0218 20:58:04.188319 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:58:04 crc kubenswrapper[4932]: E0218 20:58:04.188356 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:04 crc kubenswrapper[4932]: E0218 20:58:04.188514 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:05 crc kubenswrapper[4932]: E0218 20:58:05.184313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:16 crc kubenswrapper[4932]: E0218 20:58:16.182731 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:17 crc kubenswrapper[4932]: E0218 20:58:17.192133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:18 crc kubenswrapper[4932]: E0218 20:58:18.181688 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:18 crc kubenswrapper[4932]: E0218 20:58:18.183668 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:27 crc kubenswrapper[4932]: E0218 20:58:27.197851 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:27 crc kubenswrapper[4932]: I0218 20:58:27.605843 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:58:27 crc kubenswrapper[4932]: I0218 20:58:27.606343 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:58:28 crc kubenswrapper[4932]: E0218 20:58:28.182556 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:30 crc kubenswrapper[4932]: E0218 20:58:30.181496 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:33 crc kubenswrapper[4932]: E0218 20:58:33.182784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:39 crc kubenswrapper[4932]: E0218 20:58:39.184162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:42 crc kubenswrapper[4932]: E0218 20:58:42.181474 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:44 crc kubenswrapper[4932]: E0218 20:58:44.183951 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:48 crc kubenswrapper[4932]: E0218 20:58:48.182553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:53 crc kubenswrapper[4932]: E0218 20:58:53.183561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:55 crc kubenswrapper[4932]: E0218 20:58:55.196024 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:56 crc kubenswrapper[4932]: E0218 20:58:56.183321 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:57 crc kubenswrapper[4932]: I0218 20:58:57.605924 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:58:57 crc kubenswrapper[4932]: I0218 20:58:57.606371 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:59:00 crc kubenswrapper[4932]: E0218 20:59:00.185459 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:06 crc kubenswrapper[4932]: E0218 20:59:06.182480 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:09 crc kubenswrapper[4932]: E0218 20:59:09.182735 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:59:11 crc kubenswrapper[4932]: I0218 20:59:11.182986 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:59:11 crc kubenswrapper[4932]: E0218 20:59:11.588726 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:59:11 crc kubenswrapper[4932]: E0218 20:59:11.589082 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" logger="UnhandledError" Feb 18 20:59:11 crc kubenswrapper[4932]: E0218 20:59:11.591055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:13 crc kubenswrapper[4932]: E0218 20:59:13.185740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:19 crc kubenswrapper[4932]: E0218 20:59:19.182547 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:24 crc kubenswrapper[4932]: E0218 20:59:24.184093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 
20:59:25 crc kubenswrapper[4932]: E0218 20:59:25.185829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.606691 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.607108 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.607216 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.608099 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.608222 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" gracePeriod=600 Feb 18 20:59:27 crc kubenswrapper[4932]: E0218 20:59:27.734550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:28 crc kubenswrapper[4932]: E0218 20:59:28.180936 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.458048 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" exitCode=0 Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.458119 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89"} Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.458243 4932 scope.go:117] "RemoveContainer" containerID="bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6" Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.459284 4932 scope.go:117] 
"RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 20:59:28 crc kubenswrapper[4932]: E0218 20:59:28.459864 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:34 crc kubenswrapper[4932]: E0218 20:59:34.184425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:39 crc kubenswrapper[4932]: E0218 20:59:39.182368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:59:39 crc kubenswrapper[4932]: E0218 20:59:39.182607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:40 crc kubenswrapper[4932]: E0218 20:59:40.182379 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:42 crc kubenswrapper[4932]: I0218 20:59:42.179541 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 20:59:42 crc kubenswrapper[4932]: E0218 20:59:42.180112 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:45 crc kubenswrapper[4932]: E0218 20:59:45.181532 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:49 crc kubenswrapper[4932]: I0218 20:59:49.250940 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-76d44d77c9-sdq6t" podUID="d359b774-654c-4532-8f81-e1beddd68479" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 20:59:50 crc kubenswrapper[4932]: E0218 20:59:50.184232 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:51 crc kubenswrapper[4932]: E0218 20:59:51.182355 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:53 crc kubenswrapper[4932]: I0218 20:59:53.179890 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 20:59:53 crc kubenswrapper[4932]: E0218 20:59:53.180821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:54 crc kubenswrapper[4932]: E0218 20:59:54.182929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:59:57 crc kubenswrapper[4932]: E0218 20:59:57.187380 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.172448 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q"] Feb 18 21:00:00 crc kubenswrapper[4932]: E0218 21:00:00.212978 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.213020 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.213451 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.214282 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q"] Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.214432 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.216387 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.216948 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.293045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.293751 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.293884 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.395527 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.396052 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.396203 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.397352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.403508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.411809 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.533309 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.096201 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q"] Feb 18 21:00:01 crc kubenswrapper[4932]: E0218 21:00:01.183428 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.848315 4932 generic.go:334] "Generic (PLEG): container finished" podID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerID="54724fd9ba60888417cdeadea1f3c9160e76f53fb713dab5b7e78b0a664a686b" exitCode=0 Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.848405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" event={"ID":"f40b15f6-29e9-4312-a3d6-b41afbbe13ee","Type":"ContainerDied","Data":"54724fd9ba60888417cdeadea1f3c9160e76f53fb713dab5b7e78b0a664a686b"} Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.848561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" event={"ID":"f40b15f6-29e9-4312-a3d6-b41afbbe13ee","Type":"ContainerStarted","Data":"9be1a18aeb4dd58c3029304908e8ab2aae86f2e97d6fbd794eb63bdb53daa635"} Feb 18 21:00:02 crc kubenswrapper[4932]: E0218 21:00:02.182151 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.255883 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.358338 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.358406 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.358591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.360580 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "f40b15f6-29e9-4312-a3d6-b41afbbe13ee" (UID: "f40b15f6-29e9-4312-a3d6-b41afbbe13ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.365148 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f40b15f6-29e9-4312-a3d6-b41afbbe13ee" (UID: "f40b15f6-29e9-4312-a3d6-b41afbbe13ee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.365428 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97" (OuterVolumeSpecName: "kube-api-access-qgz97") pod "f40b15f6-29e9-4312-a3d6-b41afbbe13ee" (UID: "f40b15f6-29e9-4312-a3d6-b41afbbe13ee"). InnerVolumeSpecName "kube-api-access-qgz97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.461731 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.462068 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.462107 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.865165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" event={"ID":"f40b15f6-29e9-4312-a3d6-b41afbbe13ee","Type":"ContainerDied","Data":"9be1a18aeb4dd58c3029304908e8ab2aae86f2e97d6fbd794eb63bdb53daa635"} Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.865219 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9be1a18aeb4dd58c3029304908e8ab2aae86f2e97d6fbd794eb63bdb53daa635" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.865268 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:04 crc kubenswrapper[4932]: I0218 21:00:04.351864 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 21:00:04 crc kubenswrapper[4932]: I0218 21:00:04.359590 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 21:00:05 crc kubenswrapper[4932]: I0218 21:00:05.211635 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" path="/var/lib/kubelet/pods/43cf3e74-b4e7-4f54-b21c-cf9018235782/volumes" Feb 18 21:00:07 crc kubenswrapper[4932]: I0218 21:00:07.410047 4932 scope.go:117] "RemoveContainer" containerID="b79875069ecc1431ced41ee0aadf13bbe89c7ee6b34078234cc6eb1c6d79dd0b" Feb 18 21:00:08 crc kubenswrapper[4932]: I0218 21:00:08.179909 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:08 crc kubenswrapper[4932]: E0218 21:00:08.180337 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:09 crc kubenswrapper[4932]: E0218 21:00:09.184153 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" 
podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:09 crc kubenswrapper[4932]: E0218 21:00:09.184712 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:15 crc kubenswrapper[4932]: E0218 21:00:15.186617 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:16 crc kubenswrapper[4932]: E0218 21:00:16.181225 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:21 crc kubenswrapper[4932]: E0218 21:00:21.181309 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:21 crc kubenswrapper[4932]: E0218 21:00:21.181340 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:22 crc kubenswrapper[4932]: I0218 21:00:22.190831 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:22 crc kubenswrapper[4932]: E0218 21:00:22.207821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:29 crc kubenswrapper[4932]: E0218 21:00:29.184008 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:31 crc kubenswrapper[4932]: E0218 21:00:31.181672 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:32 crc kubenswrapper[4932]: E0218 21:00:32.183995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:34 crc kubenswrapper[4932]: E0218 21:00:34.183263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:35 crc kubenswrapper[4932]: I0218 21:00:35.180491 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:35 crc kubenswrapper[4932]: E0218 21:00:35.181754 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:41 crc kubenswrapper[4932]: E0218 21:00:41.183897 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:43 crc kubenswrapper[4932]: E0218 21:00:43.183313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:45 crc kubenswrapper[4932]: E0218 21:00:45.185849 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:46 crc kubenswrapper[4932]: I0218 21:00:46.180150 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:46 crc kubenswrapper[4932]: E0218 21:00:46.180837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:46 crc kubenswrapper[4932]: E0218 21:00:46.183056 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:55 crc kubenswrapper[4932]: E0218 21:00:55.183378 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:55 crc kubenswrapper[4932]: E0218 21:00:55.183472 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:56 crc kubenswrapper[4932]: E0218 21:00:56.184862 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:58 crc kubenswrapper[4932]: I0218 21:00:58.180368 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:58 crc kubenswrapper[4932]: E0218 21:00:58.180841 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.164627 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524141-46g96"] Feb 18 21:01:00 crc kubenswrapper[4932]: E0218 21:01:00.165556 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerName="collect-profiles" Feb 18 21:01:00 crc 
kubenswrapper[4932]: I0218 21:01:00.165569 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.165768 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.166625 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.180376 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524141-46g96"] Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308594 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308670 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410515 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410606 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410690 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.418665 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.421638 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.425208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.446912 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.489252 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:01 crc kubenswrapper[4932]: I0218 21:01:01.063135 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524141-46g96"] Feb 18 21:01:01 crc kubenswrapper[4932]: I0218 21:01:01.588878 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerStarted","Data":"d2fd48309e425e11937534e0fcf43aa043e873c73ab1c12d5e2f7045fa7f9139"} Feb 18 21:01:01 crc kubenswrapper[4932]: I0218 21:01:01.589344 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerStarted","Data":"c5d586f009677da884c30ca0caa506073f0c1a94d127ca41f5c3693329f132b6"} Feb 18 21:01:05 crc kubenswrapper[4932]: I0218 21:01:05.651922 4932 generic.go:334] "Generic (PLEG): container finished" podID="39b3cde9-8940-4757-8073-9f90910d6a30" containerID="d2fd48309e425e11937534e0fcf43aa043e873c73ab1c12d5e2f7045fa7f9139" exitCode=0 Feb 18 21:01:05 crc kubenswrapper[4932]: I0218 21:01:05.652009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerDied","Data":"d2fd48309e425e11937534e0fcf43aa043e873c73ab1c12d5e2f7045fa7f9139"} Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.192836 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.211009 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.282838 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.283431 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.283657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.283731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.302523 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7" (OuterVolumeSpecName: "kube-api-access-tvrf7") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "kube-api-access-tvrf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.305317 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.324058 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.381807 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data" (OuterVolumeSpecName: "config-data") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387696 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387742 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387761 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387775 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.406088 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.406280 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.407513 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.682817 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerDied","Data":"c5d586f009677da884c30ca0caa506073f0c1a94d127ca41f5c3693329f132b6"} Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.682874 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d586f009677da884c30ca0caa506073f0c1a94d127ca41f5c3693329f132b6" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.682927 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:08 crc kubenswrapper[4932]: E0218 21:01:08.183374 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:11 crc kubenswrapper[4932]: I0218 21:01:11.179561 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:11 crc kubenswrapper[4932]: E0218 21:01:11.180640 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:11 crc kubenswrapper[4932]: E0218 21:01:11.184663 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:18 crc kubenswrapper[4932]: E0218 21:01:18.183403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:21 crc kubenswrapper[4932]: E0218 21:01:21.184202 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:23 crc kubenswrapper[4932]: I0218 21:01:23.180409 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:23 crc kubenswrapper[4932]: E0218 21:01:23.181301 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:23 crc kubenswrapper[4932]: E0218 21:01:23.186289 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:26 crc kubenswrapper[4932]: E0218 21:01:26.184039 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:29 crc kubenswrapper[4932]: E0218 21:01:29.183695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:33 crc kubenswrapper[4932]: E0218 21:01:33.184918 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:35 crc kubenswrapper[4932]: E0218 21:01:35.186271 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:37 crc 
kubenswrapper[4932]: I0218 21:01:37.190846 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:37 crc kubenswrapper[4932]: E0218 21:01:37.192237 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:40 crc kubenswrapper[4932]: E0218 21:01:40.182282 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:44 crc kubenswrapper[4932]: E0218 21:01:44.182461 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:44 crc kubenswrapper[4932]: E0218 21:01:44.182577 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:48 crc kubenswrapper[4932]: E0218 21:01:48.183222 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:51 crc kubenswrapper[4932]: I0218 21:01:51.180842 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:51 crc kubenswrapper[4932]: E0218 21:01:51.182002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:51 crc kubenswrapper[4932]: E0218 21:01:51.184698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:56 crc kubenswrapper[4932]: E0218 21:01:56.184056 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:56 crc kubenswrapper[4932]: E0218 21:01:56.184098 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:00 crc kubenswrapper[4932]: E0218 21:02:00.182799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:02 crc kubenswrapper[4932]: I0218 21:02:02.180551 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:02 crc kubenswrapper[4932]: E0218 21:02:02.181356 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:05 crc kubenswrapper[4932]: E0218 21:02:05.807822 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:02:07 crc kubenswrapper[4932]: E0218 21:02:07.195693 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:13 crc kubenswrapper[4932]: E0218 21:02:13.182979 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:17 crc kubenswrapper[4932]: I0218 21:02:17.191825 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:17 crc kubenswrapper[4932]: E0218 21:02:17.192696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:18 crc kubenswrapper[4932]: E0218 21:02:18.183065 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:19 crc kubenswrapper[4932]: E0218 21:02:19.386503 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: 
reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 21:02:19 crc kubenswrapper[4932]: E0218 21:02:19.386894 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: initializing source 
docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:02:19 crc kubenswrapper[4932]: E0218 21:02:19.388146 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:24 crc kubenswrapper[4932]: E0218 21:02:24.182656 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:30 crc kubenswrapper[4932]: I0218 21:02:30.180112 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:30 crc kubenswrapper[4932]: E0218 21:02:30.180857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:32 crc kubenswrapper[4932]: E0218 21:02:32.183793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:34 crc kubenswrapper[4932]: E0218 21:02:34.183274 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:35 crc kubenswrapper[4932]: E0218 21:02:35.188878 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:36 crc kubenswrapper[4932]: E0218 21:02:36.291344 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:02:36 crc kubenswrapper[4932]: E0218 21:02:36.291956 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:02:36 crc kubenswrapper[4932]: E0218 21:02:36.293462 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:02:42 crc kubenswrapper[4932]: I0218 21:02:42.179440 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:42 crc kubenswrapper[4932]: E0218 21:02:42.180943 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:44 crc kubenswrapper[4932]: E0218 21:02:44.183240 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:46 crc kubenswrapper[4932]: E0218 21:02:46.180465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:48 crc kubenswrapper[4932]: E0218 21:02:48.183055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:49 crc kubenswrapper[4932]: E0218 21:02:49.182630 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:02:57 crc kubenswrapper[4932]: I0218 21:02:57.193684 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:57 crc kubenswrapper[4932]: E0218 21:02:57.195047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:58 crc kubenswrapper[4932]: E0218 21:02:58.182890 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:59 crc kubenswrapper[4932]: E0218 21:02:59.182924 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:00 crc kubenswrapper[4932]: E0218 21:03:00.185110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:01 crc kubenswrapper[4932]: E0218 21:03:01.183135 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:09 crc kubenswrapper[4932]: E0218 21:03:09.185262 4932 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:11 crc kubenswrapper[4932]: I0218 21:03:11.181771 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:11 crc kubenswrapper[4932]: E0218 21:03:11.182535 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:11 crc kubenswrapper[4932]: E0218 21:03:11.183702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:15 crc kubenswrapper[4932]: E0218 21:03:15.187468 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:15 crc kubenswrapper[4932]: E0218 21:03:15.187635 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:20 crc kubenswrapper[4932]: E0218 21:03:20.182162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:24 crc kubenswrapper[4932]: I0218 21:03:24.180426 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:24 crc kubenswrapper[4932]: E0218 21:03:24.181665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:26 crc kubenswrapper[4932]: E0218 21:03:26.182531 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:27 crc kubenswrapper[4932]: E0218 21:03:27.198413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:28 crc kubenswrapper[4932]: E0218 21:03:28.182334 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:31 crc kubenswrapper[4932]: E0218 21:03:31.183409 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:38 crc kubenswrapper[4932]: I0218 21:03:38.180863 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:38 crc kubenswrapper[4932]: E0218 21:03:38.181985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:38 crc kubenswrapper[4932]: E0218 21:03:38.182388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:40 crc kubenswrapper[4932]: E0218 21:03:40.181009 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:42 crc kubenswrapper[4932]: E0218 21:03:42.181576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:42 crc kubenswrapper[4932]: E0218 21:03:42.181815 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:49 crc kubenswrapper[4932]: I0218 21:03:49.183236 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:49 crc kubenswrapper[4932]: E0218 21:03:49.184399 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:50 crc kubenswrapper[4932]: E0218 21:03:50.182277 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:53 crc kubenswrapper[4932]: E0218 21:03:53.185064 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:53 crc kubenswrapper[4932]: E0218 21:03:53.187550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:54 crc kubenswrapper[4932]: E0218 21:03:54.181952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:03 crc kubenswrapper[4932]: I0218 21:04:03.181309 4932 scope.go:117] 
"RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:03 crc kubenswrapper[4932]: E0218 21:04:03.182455 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:04:04 crc kubenswrapper[4932]: E0218 21:04:04.182254 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:05 crc kubenswrapper[4932]: E0218 21:04:05.181243 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:06 crc kubenswrapper[4932]: E0218 21:04:06.184402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:07 crc kubenswrapper[4932]: E0218 21:04:07.201006 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:14 crc kubenswrapper[4932]: I0218 21:04:14.180740 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:14 crc kubenswrapper[4932]: E0218 21:04:14.181889 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:04:17 crc kubenswrapper[4932]: E0218 21:04:17.197238 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:17 crc kubenswrapper[4932]: I0218 21:04:17.197474 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 21:04:18 crc kubenswrapper[4932]: E0218 21:04:18.182471 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:20 crc kubenswrapper[4932]: E0218 21:04:20.180918 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:25 crc kubenswrapper[4932]: I0218 21:04:25.180087 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:25 crc kubenswrapper[4932]: E0218 21:04:25.180892 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:04:25 crc kubenswrapper[4932]: I0218 21:04:25.644207 4932 generic.go:334] "Generic (PLEG): container finished" podID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerID="b8cc57bfeb38d618854d30ad2a0303534b4a0674c797bd1d7dcd4db1e8159186" exitCode=0 Feb 18 21:04:25 crc kubenswrapper[4932]: I0218 21:04:25.644326 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerDied","Data":"b8cc57bfeb38d618854d30ad2a0303534b4a0674c797bd1d7dcd4db1e8159186"} Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.125574 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236343 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236389 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236430 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236505 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236547 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236604 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236635 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236671 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.238568 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.238728 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data" (OuterVolumeSpecName: "config-data") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.244148 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.258333 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.258326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls" (OuterVolumeSpecName: "kube-api-access-7fhls") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "kube-api-access-7fhls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.286009 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.288301 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.302650 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.308210 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: E0218 21:04:27.320908 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:04:27 crc kubenswrapper[4932]: E0218 21:04:27.321126 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:04:27 crc kubenswrapper[4932]: E0218 21:04:27.322464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339094 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339129 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339140 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339159 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339185 4932 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339196 4932 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 
21:04:27.339205 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339213 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339222 4932 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.360216 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.441700 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.671537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerDied","Data":"8c283e884b8f80bf01f3a12151451c0769e806057cd6f8d4c57d644f30012eb1"} Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.671599 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c283e884b8f80bf01f3a12151451c0769e806057cd6f8d4c57d644f30012eb1" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.671637 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 21:04:30 crc kubenswrapper[4932]: E0218 21:04:30.181875 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:31 crc kubenswrapper[4932]: E0218 21:04:31.183689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.348041 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:04:32 crc kubenswrapper[4932]: E0218 21:04:32.348719 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b3cde9-8940-4757-8073-9f90910d6a30" containerName="keystone-cron" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.348761 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b3cde9-8940-4757-8073-9f90910d6a30" containerName="keystone-cron" Feb 18 21:04:32 crc kubenswrapper[4932]: E0218 21:04:32.348792 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.348801 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.349261 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.349321 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b3cde9-8940-4757-8073-9f90910d6a30" containerName="keystone-cron" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.350714 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.353627 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bccj2" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.361461 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.477423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.477508 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dqq\" (UniqueName: \"kubernetes.io/projected/93c5c98c-cc87-4938-982e-54d3e1663dda-kube-api-access-z4dqq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.579413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") 
pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.579508 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dqq\" (UniqueName: \"kubernetes.io/projected/93c5c98c-cc87-4938-982e-54d3e1663dda-kube-api-access-z4dqq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.580573 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.609603 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dqq\" (UniqueName: \"kubernetes.io/projected/93c5c98c-cc87-4938-982e-54d3e1663dda-kube-api-access-z4dqq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.632497 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.678043 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:33 crc kubenswrapper[4932]: E0218 21:04:33.181768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:33 crc kubenswrapper[4932]: I0218 21:04:33.219049 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:04:33 crc kubenswrapper[4932]: I0218 21:04:33.752036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"93c5c98c-cc87-4938-982e-54d3e1663dda","Type":"ContainerStarted","Data":"4d89e3959819723e86718f91888f7bdd71db7dc81fc96cabec42c9f19d6ae047"} Feb 18 21:04:36 crc kubenswrapper[4932]: I0218 21:04:36.180536 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:36 crc kubenswrapper[4932]: I0218 21:04:36.794239 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"} Feb 18 21:04:38 crc kubenswrapper[4932]: E0218 21:04:38.181236 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:41 crc kubenswrapper[4932]: E0218 21:04:41.184421 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:43 crc kubenswrapper[4932]: E0218 21:04:43.183061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:48 crc kubenswrapper[4932]: E0218 21:04:48.182486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:52 crc kubenswrapper[4932]: E0218 21:04:52.181164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.180941 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.767241 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.767750 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.769025 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.994539 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:04:54 crc kubenswrapper[4932]: E0218 21:04:54.182519 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:01 crc kubenswrapper[4932]: E0218 21:05:01.182628 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" 
Feb 18 21:05:04 crc kubenswrapper[4932]: E0218 21:05:04.183675 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:08 crc kubenswrapper[4932]: E0218 21:05:08.182656 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:09 crc kubenswrapper[4932]: E0218 21:05:09.184931 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:13 crc kubenswrapper[4932]: E0218 21:05:13.183761 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:05:18 crc kubenswrapper[4932]: E0218 21:05:18.183311 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:23 crc kubenswrapper[4932]: E0218 21:05:23.202410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:23 crc kubenswrapper[4932]: E0218 21:05:23.202420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:28 crc kubenswrapper[4932]: E0218 21:05:28.182224 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:05:29 crc kubenswrapper[4932]: E0218 21:05:29.293445 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:05:29 crc kubenswrapper[4932]: E0218 21:05:29.293963 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:05:29 crc kubenswrapper[4932]: E0218 21:05:29.295334 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:05:31 crc kubenswrapper[4932]: E0218 21:05:31.183165 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:35 crc kubenswrapper[4932]: E0218 21:05:35.184545 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:37 crc kubenswrapper[4932]: E0218 21:05:37.199037 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:40 crc kubenswrapper[4932]: E0218 21:05:40.181760 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:05:44 crc kubenswrapper[4932]: E0218 21:05:44.183720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:05:45 crc kubenswrapper[4932]: E0218 21:05:45.181734 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:47 crc kubenswrapper[4932]: E0218 21:05:47.193886 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:49 crc kubenswrapper[4932]: E0218 21:05:49.181904 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:55 crc kubenswrapper[4932]: E0218 21:05:55.186487 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:00 crc kubenswrapper[4932]: E0218 21:06:00.181161 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:01 crc kubenswrapper[4932]: E0218 21:06:01.188278 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:02 crc kubenswrapper[4932]: E0218 21:06:02.181531 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:07 crc kubenswrapper[4932]: E0218 21:06:07.194216 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:12 crc kubenswrapper[4932]: E0218 21:06:12.336050 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:06:12 crc kubenswrapper[4932]: E0218 21:06:12.337208 4932 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:06:12 crc kubenswrapper[4932]: E0218 21:06:12.338539 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:14 crc kubenswrapper[4932]: E0218 21:06:14.181991 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:16 crc kubenswrapper[4932]: E0218 21:06:16.183059 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:19 crc kubenswrapper[4932]: E0218 21:06:19.182745 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:23 crc kubenswrapper[4932]: E0218 21:06:23.545076 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 21:06:23 crc kubenswrapper[4932]: E0218 21:06:23.545785 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:06:23 crc kubenswrapper[4932]: E0218 21:06:23.547127 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:25 crc kubenswrapper[4932]: E0218 21:06:25.182501 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:27 crc kubenswrapper[4932]: E0218 21:06:27.193744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:30 crc kubenswrapper[4932]: E0218 21:06:30.182553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:33 crc kubenswrapper[4932]: E0218 21:06:33.184269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:36 crc kubenswrapper[4932]: E0218 21:06:36.186319 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:39 crc kubenswrapper[4932]: E0218 21:06:39.182758 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:40 crc kubenswrapper[4932]: E0218 21:06:40.182029 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:41 crc kubenswrapper[4932]: E0218 21:06:41.181754 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:46 crc kubenswrapper[4932]: E0218 21:06:46.183016 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:50 crc kubenswrapper[4932]: E0218 21:06:50.182199 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:51 crc kubenswrapper[4932]: E0218 21:06:51.195660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:51 crc kubenswrapper[4932]: E0218 21:06:51.196495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:56 crc kubenswrapper[4932]: E0218 21:06:56.184104 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:57 crc kubenswrapper[4932]: I0218 
21:06:57.605794 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:06:57 crc kubenswrapper[4932]: I0218 21:06:57.606370 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:06:58 crc kubenswrapper[4932]: E0218 21:06:58.182688 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:02 crc kubenswrapper[4932]: E0218 21:07:02.181974 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:03 crc kubenswrapper[4932]: E0218 21:07:03.183379 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:07 crc kubenswrapper[4932]: E0218 21:07:07.191635 
4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:10 crc kubenswrapper[4932]: E0218 21:07:10.184446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:14 crc kubenswrapper[4932]: E0218 21:07:14.184068 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.367716 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.369637 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.371325 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qrwmf"/"kube-root-ca.crt" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.371865 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qrwmf"/"default-dockercfg-kwmlf" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.372061 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qrwmf"/"openshift-service-ca.crt" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.381569 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.471098 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.471702 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.573078 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " 
pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.573206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.573767 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.592813 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.699186 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:16 crc kubenswrapper[4932]: I0218 21:07:16.224772 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:16 crc kubenswrapper[4932]: I0218 21:07:16.834381 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" event={"ID":"dcf83976-1f0c-4cf7-91d8-3f0def01fe46","Type":"ContainerStarted","Data":"258a813aa68937edd52a807f5c0f7e594bcd8054419de7e27604d62fdf4c4a65"} Feb 18 21:07:17 crc kubenswrapper[4932]: E0218 21:07:17.196920 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:19 crc kubenswrapper[4932]: E0218 21:07:19.213449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:21 crc kubenswrapper[4932]: E0218 21:07:21.182667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:22 crc kubenswrapper[4932]: E0218 21:07:22.295114 4932 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:07:22 crc kubenswrapper[4932]: E0218 21:07:22.296673 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:07:22 crc kubenswrapper[4932]: E0218 21:07:22.297916 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:07:27 crc kubenswrapper[4932]: E0218 21:07:27.200337 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:27 crc kubenswrapper[4932]: I0218 21:07:27.606591 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:07:27 crc kubenswrapper[4932]: I0218 21:07:27.606947 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:07:34 crc kubenswrapper[4932]: E0218 21:07:34.182324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:34 crc 
kubenswrapper[4932]: E0218 21:07:34.182365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.182962 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.338876 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.339148 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 21:07:36 crc kubenswrapper[4932]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 21:07:36 crc kubenswrapper[4932]: HAVE_SESSION_TOOLS=true Feb 18 21:07:36 crc kubenswrapper[4932]: else Feb 18 21:07:36 crc kubenswrapper[4932]: HAVE_SESSION_TOOLS=false Feb 18 21:07:36 crc kubenswrapper[4932]: fi Feb 18 21:07:36 crc kubenswrapper[4932]: Feb 18 21:07:36 crc kubenswrapper[4932]: Feb 18 21:07:36 crc kubenswrapper[4932]: echo "[disk usage checker] 
Started" Feb 18 21:07:36 crc kubenswrapper[4932]: target_dir="/must-gather" Feb 18 21:07:36 crc kubenswrapper[4932]: usage_percentage_limit="80" Feb 18 21:07:36 crc kubenswrapper[4932]: while true; do Feb 18 21:07:36 crc kubenswrapper[4932]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 21:07:36 crc kubenswrapper[4932]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 21:07:36 crc kubenswrapper[4932]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 21:07:36 crc kubenswrapper[4932]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 21:07:36 crc kubenswrapper[4932]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:07:36 crc kubenswrapper[4932]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 21:07:36 crc kubenswrapper[4932]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 21:07:36 crc kubenswrapper[4932]: pkill --signal SIGKILL --session "$sid" Feb 18 21:07:36 crc kubenswrapper[4932]: done Feb 18 21:07:36 crc kubenswrapper[4932]: else Feb 18 21:07:36 crc kubenswrapper[4932]: kill 0 Feb 18 21:07:36 crc kubenswrapper[4932]: fi Feb 18 21:07:36 crc kubenswrapper[4932]: exit 1 Feb 18 21:07:36 crc kubenswrapper[4932]: fi Feb 18 21:07:36 crc kubenswrapper[4932]: sleep 5 Feb 18 21:07:36 crc kubenswrapper[4932]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:07:36 crc kubenswrapper[4932]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 21:07:36 crc kubenswrapper[4932]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:07:36 crc kubenswrapper[4932]: MUSTGATHER_EOF Feb 18 21:07:36 crc kubenswrapper[4932]: else Feb 18 21:07:36 crc kubenswrapper[4932]: 
ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:07:36 crc kubenswrapper[4932]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-8wp6g_openshift-must-gather-qrwmf(dcf83976-1f0c-4cf7-91d8-3f0def01fe46): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out Feb 18 21:07:36 crc kubenswrapper[4932]: > logger="UnhandledError" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.350666 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" podUID="dcf83976-1f0c-4cf7-91d8-3f0def01fe46" Feb 18 21:07:37 crc kubenswrapper[4932]: E0218 21:07:37.092484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" podUID="dcf83976-1f0c-4cf7-91d8-3f0def01fe46" Feb 18 21:07:39 crc kubenswrapper[4932]: E0218 21:07:39.182429 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:40 crc kubenswrapper[4932]: E0218 21:07:40.365083 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 21:07:40 crc kubenswrapper[4932]: E0218 21:07:40.365364 4932 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:07:40 crc kubenswrapper[4932]: E0218 21:07:40.366635 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:45 crc kubenswrapper[4932]: E0218 21:07:45.184047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:50 crc kubenswrapper[4932]: E0218 21:07:50.180677 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.104941 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.116240 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:51 crc kubenswrapper[4932]: E0218 21:07:51.184194 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.592865 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.724823 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.725204 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.725244 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dcf83976-1f0c-4cf7-91d8-3f0def01fe46" (UID: "dcf83976-1f0c-4cf7-91d8-3f0def01fe46"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.726082 4932 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.730434 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg" (OuterVolumeSpecName: "kube-api-access-l9pzg") pod "dcf83976-1f0c-4cf7-91d8-3f0def01fe46" (UID: "dcf83976-1f0c-4cf7-91d8-3f0def01fe46"). InnerVolumeSpecName "kube-api-access-l9pzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.828972 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") on node \"crc\" DevicePath \"\"" Feb 18 21:07:52 crc kubenswrapper[4932]: I0218 21:07:52.293142 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:53 crc kubenswrapper[4932]: I0218 21:07:53.192732 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf83976-1f0c-4cf7-91d8-3f0def01fe46" path="/var/lib/kubelet/pods/dcf83976-1f0c-4cf7-91d8-3f0def01fe46/volumes" Feb 18 21:07:54 crc kubenswrapper[4932]: E0218 21:07:54.181944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.606782 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.607479 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.607549 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.608771 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"} 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.608873 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49" gracePeriod=600 Feb 18 21:07:58 crc kubenswrapper[4932]: E0218 21:07:58.179843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359391 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49" exitCode=0 Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"} Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359465 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"} Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359480 
4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:08:01 crc kubenswrapper[4932]: E0218 21:08:01.183625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:05 crc kubenswrapper[4932]: E0218 21:08:05.182105 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:06 crc kubenswrapper[4932]: E0218 21:08:06.183341 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:07 crc kubenswrapper[4932]: E0218 21:08:07.332033 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:08:07 crc kubenswrapper[4932]: E0218 21:08:07.332539 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:08:07 crc kubenswrapper[4932]: E0218 21:08:07.333876 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:11 crc 
kubenswrapper[4932]: E0218 21:08:11.183995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:12 crc kubenswrapper[4932]: E0218 21:08:12.181559 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:16 crc kubenswrapper[4932]: E0218 21:08:16.183567 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:17 crc kubenswrapper[4932]: E0218 21:08:17.188053 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:21 crc kubenswrapper[4932]: E0218 21:08:21.187595 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:24 crc kubenswrapper[4932]: E0218 21:08:24.184716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:26 crc kubenswrapper[4932]: E0218 21:08:26.181988 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:28 crc kubenswrapper[4932]: E0218 21:08:28.188855 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:29 crc kubenswrapper[4932]: E0218 21:08:29.181561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:36 crc kubenswrapper[4932]: E0218 21:08:36.180986 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:36 crc kubenswrapper[4932]: E0218 21:08:36.181208 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:39 crc kubenswrapper[4932]: E0218 21:08:39.182096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:39 crc kubenswrapper[4932]: E0218 21:08:39.182100 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:42 crc kubenswrapper[4932]: E0218 21:08:42.182140 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:49 crc kubenswrapper[4932]: E0218 21:08:49.183373 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:50 crc kubenswrapper[4932]: E0218 21:08:50.181425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:51 crc kubenswrapper[4932]: E0218 21:08:51.182760 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:53 crc kubenswrapper[4932]: E0218 21:08:53.183658 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:01 crc kubenswrapper[4932]: E0218 21:09:01.185424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:01 crc kubenswrapper[4932]: E0218 
21:09:01.186348 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:02 crc kubenswrapper[4932]: E0218 21:09:02.181467 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:06 crc kubenswrapper[4932]: E0218 21:09:06.182899 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.183770 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.184309 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.300351 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.300512 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: 
pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.301779 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:17 crc kubenswrapper[4932]: E0218 21:09:17.194523 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:17 crc kubenswrapper[4932]: E0218 21:09:17.194737 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:26 crc kubenswrapper[4932]: E0218 21:09:26.182143 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:26 crc kubenswrapper[4932]: E0218 21:09:26.182701 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:27 crc kubenswrapper[4932]: E0218 21:09:27.213624 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:28 crc kubenswrapper[4932]: E0218 21:09:28.182520 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:31 crc kubenswrapper[4932]: E0218 21:09:31.182944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:38 crc kubenswrapper[4932]: I0218 21:09:38.182398 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 21:09:40 crc kubenswrapper[4932]: E0218 21:09:40.181797 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:40 crc kubenswrapper[4932]: E0218 21:09:40.181810 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:41 crc kubenswrapper[4932]: E0218 21:09:41.182536 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:44 crc kubenswrapper[4932]: E0218 21:09:44.182406 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:53 crc kubenswrapper[4932]: E0218 21:09:53.184526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:54 crc kubenswrapper[4932]: E0218 21:09:54.181899 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:55 crc kubenswrapper[4932]: E0218 21:09:55.190419 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:57 crc kubenswrapper[4932]: I0218 21:09:57.606708 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:09:57 crc kubenswrapper[4932]: I0218 21:09:57.607251 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:09:58 crc kubenswrapper[4932]: E0218 21:09:58.296353 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:09:58 crc kubenswrapper[4932]: E0218 21:09:58.296965 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:09:58 crc kubenswrapper[4932]: E0218 21:09:58.298283 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:59 crc 
kubenswrapper[4932]: E0218 21:09:59.184999 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:10:05 crc kubenswrapper[4932]: E0218 21:10:05.196477 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:10:05 crc kubenswrapper[4932]: E0218 21:10:05.196919 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:10:07 crc kubenswrapper[4932]: E0218 21:10:07.204059 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:10:11 crc kubenswrapper[4932]: E0218 21:10:11.182525 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:10:12 crc kubenswrapper[4932]: E0218 21:10:12.183062 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:10:16 crc kubenswrapper[4932]: E0218 21:10:16.181314 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:10:17 crc kubenswrapper[4932]: E0218 21:10:17.193593 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:10:22 crc kubenswrapper[4932]: E0218 21:10:22.183781 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:10:23 crc kubenswrapper[4932]: E0218 21:10:23.184500 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:10:25 crc kubenswrapper[4932]: E0218 21:10:25.181134 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:10:27 crc kubenswrapper[4932]: I0218 21:10:27.606808 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:10:27 crc kubenswrapper[4932]: I0218 21:10:27.607419 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:10:30 crc kubenswrapper[4932]: E0218 21:10:30.181521 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:10:31 crc kubenswrapper[4932]: E0218 21:10:31.182537 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:10:34 crc kubenswrapper[4932]: E0218 21:10:34.181244 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 21:10:37 crc kubenswrapper[4932]: E0218 21:10:37.197650 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:10:40 crc kubenswrapper[4932]: E0218 21:10:40.183940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:10:41 crc kubenswrapper[4932]: E0218 21:10:41.181402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:10:44 crc kubenswrapper[4932]: E0218 21:10:44.185768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:10:48 crc kubenswrapper[4932]: E0218 21:10:48.183969 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:10:49 crc kubenswrapper[4932]: E0218 21:10:49.182055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 21:10:52 crc kubenswrapper[4932]: E0218 21:10:52.181998 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:10:55 crc kubenswrapper[4932]: E0218 21:10:55.188475 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:10:55 crc kubenswrapper[4932]: E0218 21:10:55.188550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.606702 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.608104 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.608233 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.610374 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.610523 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4" gracePeriod=600
Feb 18 21:10:57 crc kubenswrapper[4932]: E0218 21:10:57.759457 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.530841 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4" exitCode=0
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.530907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"}
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.530956 4932 scope.go:117] "RemoveContainer" containerID="ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.533328 4932 scope.go:117] "RemoveContainer" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"
Feb 18 21:10:58 crc kubenswrapper[4932]: E0218 21:10:58.534071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:11:00 crc kubenswrapper[4932]: E0218 21:11:00.182793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:11:03 crc kubenswrapper[4932]: E0218 21:11:03.182323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 21:11:05 crc kubenswrapper[4932]: E0218 21:11:05.182446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:11:08 crc kubenswrapper[4932]: E0218 21:11:08.182610 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:11:09 crc kubenswrapper[4932]: E0218 21:11:09.180948 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:11:13 crc kubenswrapper[4932]: I0218 21:11:13.179343 4932 scope.go:117] "RemoveContainer" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"
Feb 18 21:11:13 crc kubenswrapper[4932]: E0218 21:11:13.180214 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:11:13 crc kubenswrapper[4932]: E0218 21:11:13.182920 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:11:14 crc kubenswrapper[4932]: E0218 21:11:14.181402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 21:11:19 crc kubenswrapper[4932]: E0218 21:11:19.182007 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:11:20 crc kubenswrapper[4932]: E0218 21:11:20.181324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:11:23 crc kubenswrapper[4932]: E0218 21:11:23.183637 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:11:28 crc kubenswrapper[4932]: I0218 21:11:28.180094 4932 scope.go:117] "RemoveContainer" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"
Feb 18 21:11:28 crc kubenswrapper[4932]: E0218 21:11:28.181681 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:11:28 crc kubenswrapper[4932]: E0218 21:11:28.182651 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:11:31 crc kubenswrapper[4932]: E0218 21:11:31.185300 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:11:31 crc kubenswrapper[4932]: E0218 21:11:31.185832 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:11:36 crc kubenswrapper[4932]: E0218 21:11:36.183863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:11:42 crc kubenswrapper[4932]: E0218 21:11:42.182213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:11:43 crc kubenswrapper[4932]: I0218 21:11:43.179389 4932 scope.go:117] "RemoveContainer" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"
Feb 18 21:11:43 crc kubenswrapper[4932]: E0218 21:11:43.180406 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:11:43 crc kubenswrapper[4932]: E0218 21:11:43.182320 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:11:45 crc kubenswrapper[4932]: I0218 21:11:45.070713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerStarted","Data":"2f105c76612dc3a96d281da1fce1e43e2ded94f9b887314ca356b9ea079b29d3"}
Feb 18 21:11:45 crc kubenswrapper[4932]: E0218 21:11:45.182089 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:11:46 crc kubenswrapper[4932]: I0218 21:11:46.087430 4932 generic.go:334] "Generic (PLEG): container finished" podID="57dbf2a4-5676-4291-911d-00038d3c7c75" containerID="2f105c76612dc3a96d281da1fce1e43e2ded94f9b887314ca356b9ea079b29d3" exitCode=0
Feb 18 21:11:46 crc kubenswrapper[4932]: I0218 21:11:46.087499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerDied","Data":"2f105c76612dc3a96d281da1fce1e43e2ded94f9b887314ca356b9ea079b29d3"}
Feb 18 21:11:47 crc kubenswrapper[4932]: E0218 21:11:47.213587 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:11:53 crc kubenswrapper[4932]: I0218 21:11:53.201733 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerStarted","Data":"539fac4c77ba462fe0c3446901e24faf13d7dcebdfd9ab6697d7e7a1a7aca9f3"}
Feb 18 21:11:53 crc kubenswrapper[4932]: I0218 21:11:53.245144 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8slcg" podStartSLOduration=4.348723942 podStartE2EDuration="36m50.245111862s" podCreationTimestamp="2026-02-18 20:35:03 +0000 UTC" firstStartedPulling="2026-02-18 20:35:06.038124906 +0000 UTC m=+3669.620079761" lastFinishedPulling="2026-02-18 21:11:51.934512806 +0000 UTC m=+5875.516467681" observedRunningTime="2026-02-18 21:11:53.228871992 +0000 UTC m=+5876.810826867" watchObservedRunningTime="2026-02-18 21:11:53.245111862 +0000 UTC m=+5876.827066747"
Feb 18 21:11:54 crc kubenswrapper[4932]: I0218 21:11:54.377942 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8slcg"
Feb 18 21:11:54 crc kubenswrapper[4932]: I0218 21:11:54.378469 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8slcg"
Feb 18 21:11:55 crc kubenswrapper[4932]: I0218 21:11:55.180303 4932 scope.go:117] "RemoveContainer" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"
Feb 18 21:11:55 crc kubenswrapper[4932]: E0218 21:11:55.180770 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:11:55 crc kubenswrapper[4932]: I0218 21:11:55.462779 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" containerName="registry-server" probeResult="failure" output=<
Feb 18 21:11:55 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s
Feb 18 21:11:55 crc kubenswrapper[4932]: >
Feb 18 21:11:57 crc kubenswrapper[4932]: E0218 21:11:57.202171 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:11:57 crc kubenswrapper[4932]: E0218 21:11:57.202201 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\""
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"